Makita’s Handheld Powered Snow Thrower

https://s3files.core77.com/blog/images/1790938_81_140409_0_G8pt0a3.jpg

These days, companies like Stihl and Makita sell multi-heads. These are battery-powered motors that can drive a variety of common landscaping attachments, like string trimmers and hedge cutters.

Uniquely, Makita also offers this Snow Thrower attachment:

The business end is 12" wide and can handle a 6" depth of snow at a time. Tiltable vanes on the inside let you control whether you want to throw the snow to the left, to the right or straight ahead. The company says you can clear about five parking spaces with two 18V batteries.

So how well does it work? Seeing is believing. Here’s Murray Kruger of Kruger Construction putting it through its paces:

Core77

Automate the export of Amazon RDS for MySQL or Amazon Aurora MySQL audit logs to Amazon S3 with batching or near real-time processing

https://d2908q01vomqb2.cloudfront.net/887309d048beef83ad3eabf2a79a64a389ab1c9f/2026/01/12/DBBLOG-50081-1-1260×597.png

Audit logging has become a crucial component of database security and compliance, helping organizations track user activities, monitor data access patterns, and maintain detailed records for regulatory requirements and security investigations. Database audit logs provide a comprehensive trail of actions performed within the database, including queries executed, changes made to data, and user authentication attempts. Managing these logs is more straightforward with a robust storage solution such as Amazon Simple Storage Service (Amazon S3).

Amazon Relational Database Service (Amazon RDS) for MySQL and Amazon Aurora MySQL-Compatible Edition provide built-in audit logging capabilities, but customers might need to export and store these logs for long-term retention and analysis. Amazon S3 offers an ideal destination, providing durability, cost-effectiveness, and integration with various analytics tools.

In this post, we explore two approaches for exporting MySQL audit logs to Amazon S3: batching with a native export to Amazon S3, or near real-time processing with Amazon Data Firehose.

Solution overview

The first solution involves batch processing: the built-in audit log export feature in Amazon RDS for MySQL or Aurora MySQL-Compatible publishes logs to Amazon CloudWatch Logs, and Amazon EventBridge periodically triggers an AWS Lambda function that creates a CloudWatch Logs export task sending the previous day’s audit logs to Amazon S3. The period (one day) is configurable based on your requirements. This solution is the most cost-effective and practical option if you don’t require the audit logs to be available in an S3 bucket in near real time. The following diagram illustrates this workflow.

Overview of solution 1

The other proposed solution uses Data Firehose to immediately process the MySQL audit logs within CloudWatch Logs and send them to an S3 bucket. This approach is suitable for business use cases that require immediate export of audit logs when they’re available within CloudWatch Logs. The following diagram illustrates this workflow.

Overview of solution 2

Use cases

Once you’ve implemented either of these solutions, you’ll have your Aurora MySQL or RDS for MySQL audit logs stored securely in Amazon S3. This opens up a wealth of possibilities for analysis, monitoring, and compliance reporting. Here’s what you can do with your exported audit logs:

  • Run Amazon Athena queries: With your audit logs in S3, you can use Amazon Athena to run SQL queries directly against your log data. This allows you to quickly analyze user activities, identify unusual patterns, or generate compliance reports. For example, you could query for all actions performed by a specific user, or find all failed login attempts within a certain time frame.
  • Create Amazon Quick Sight dashboards: Using Amazon Quick Sight in conjunction with Athena, you can create visual dashboards of your audit log data. This can help you spot trends over time, such as peak usage hours, most active users, or frequently accessed database objects.
  • Set up automated alerting: By combining your S3-stored logs with AWS Lambda and Amazon SNS, you can create automated alerts for specific events. For instance, you could set up a system to notify security personnel if there’s an unusual spike in failed login attempts or if sensitive tables are accessed outside of business hours.
  • Perform long-term analysis: With your audit logs centralized in S3, you can perform long-term trend analysis. This could help you understand how database usage patterns change over time, informing capacity planning and security policies.
  • Meet compliance requirements: Many regulatory frameworks require retention and analysis of database audit logs. With your logs in S3, you can easily demonstrate compliance with these requirements, running reports as needed for auditors.

By leveraging these capabilities, you can turn your audit logs from a passive security measure into an active tool for database management, security enhancement, and business intelligence.

Comparing solutions

The first solution uses EventBridge to periodically trigger a Lambda function. This function creates a CloudWatch Logs export task that sends a batch of log data to Amazon S3 at regular intervals. This method is well suited for scenarios where you prefer to process logs in batches to optimize costs and resources.

The second solution uses Data Firehose to create a real-time audit log processing pipeline. This approach streams logs directly from CloudWatch to an S3 bucket, providing near real-time access to your audit data. In this context, “real-time” means that log data is processed and delivered continuously as it is generated, rather than being sent at a pre-defined interval. This solution is ideal for scenarios requiring immediate access to log data or for high-volume logging environments.

Whether you choose the near real-time streaming approach or the scheduled export method, you will be well-equipped to manage your Aurora MySQL and RDS for MySQL audit logs effectively.

Prerequisites for both solutions

Before getting started, complete the following prerequisites:

  1. Create or have an existing RDS for MySQL instance or Aurora MySQL cluster.
  2. Enable audit logging:
    1. For Amazon RDS, add the MariaDB Audit Plugin within your option group.
    2. For Aurora, enable Advanced Auditing within your parameter group.

Note: By default, audit logging records activity for all users, which can potentially be costly.

  3. Publish MySQL audit logs to CloudWatch Logs.
  4. Make sure you have a terminal with the AWS Command Line Interface (AWS CLI) installed or use AWS CloudShell within your console.
  5. Create an S3 bucket to store the MySQL audit logs using the following AWS CLI command:

aws s3api create-bucket --bucket <bucket_name>

After the command is complete, you will see an output similar to the following:
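The exact value depends on your Region and bucket name; a typical response looks roughly like this:

{
    "Location": "/<bucket_name>"
}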

Note: Each solution has specific service components which are discussed in their respective sections.

Solution 1: Perform audit log batch processing with EventBridge and Lambda

In this solution, we create a Lambda function to export your audit logs to Amazon S3 on the schedule you set using EventBridge Scheduler. This solution offers a cost-efficient way to transfer audit log files to an S3 bucket in a scheduled manner.

Create IAM role for EventBridge Scheduler

The first step is to create an AWS Identity and Access Management (IAM) role responsible for allowing EventBridge Scheduler to invoke the Lambda function we will create later. Complete the following steps to create this role:

  1. Connect to a terminal with the AWS CLI or CloudShell.
  2. Create a file named TrustPolicyForEventBridgeScheduler.json using your preferred text editor:

nano TrustPolicyForEventBridgeScheduler.json

  3. Insert the following trust policy into the JSON file:

Note: Make sure to amend SourceAccount before saving the file. The condition is used to prevent unauthorized access from other AWS accounts.
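A minimal sketch of such a trust policy, assuming you only scope it by account (replace <account-id> with your AWS account ID):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "scheduler.amazonaws.com" },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": { "aws:SourceAccount": "<account-id>" }
            }
        }
    ]
}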

  4. Create a file named PermissionsForEventBridgeScheduler.json using your preferred text editor:

nano PermissionsForEventBridgeScheduler.json

  5. Insert the following permissions into the JSON file:

Note: Replace <LambdaFunctionName> with the name of the function you’ll create later.
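A minimal sketch of the permissions policy, granting invoke access to just that function; <region>, <account-id>, and <LambdaFunctionName> are placeholders:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:<region>:<account-id>:function:<LambdaFunctionName>"
        }
    ]
}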

  6. Use the following AWS CLI command to create the IAM role for EventBridge Scheduler to invoke the Lambda function (sketched after this list):
  7. Create the IAM policy and attach it to the previously created IAM role:
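A sketch of steps 6 and 7; the role and policy names here are examples you can change:

# Step 6: create the role that EventBridge Scheduler will assume
aws iam create-role \
    --role-name EventBridgeSchedulerInvokeLambdaRole \
    --assume-role-policy-document file://TrustPolicyForEventBridgeScheduler.json

# Step 7: create the permissions policy and attach it to the role
aws iam create-policy \
    --policy-name EventBridgeSchedulerInvokeLambdaPolicy \
    --policy-document file://PermissionsForEventBridgeScheduler.json
aws iam attach-role-policy \
    --role-name EventBridgeSchedulerInvokeLambdaRole \
    --policy-arn arn:aws:iam::<account-id>:policy/EventBridgeSchedulerInvokeLambdaPolicy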

In this section, we created an IAM role with appropriate trust and permissions policies that allow EventBridge Scheduler to securely invoke Lambda functions from your AWS account. Next, we’ll create another IAM role that defines the permissions that your Lambda function needs to execute its tasks.

Create IAM role for Lambda

The next step is to create an IAM role responsible for allowing Lambda to put records from CloudWatch into your S3 bucket. Complete the following steps to create this role:

  1. Connect to a terminal with the AWS CLI or CloudShell.
  2. Create and write to a JSON file for the IAM trust policy using your preferred text editor:

nano TrustPolicyForLambda.json

  3. Insert the following trust policy into the JSON file (sketched after this list):
  4. Use the following AWS CLI command to create the IAM role for Lambda to insert records from CloudWatch to Amazon S3 (also sketched after this list):
  5. Create a file named PermissionsForLambda.json using your preferred text editor:
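A sketch of steps 3 and 4: a standard Lambda trust policy, followed by the create-role call (the role name is an example):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "lambda.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}

aws iam create-role \
    --role-name LambdaAuditLogExportRole \
    --assume-role-policy-document file://TrustPolicyForLambda.json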

nano PermissionsForLambda.json

  6. Insert the following permissions into the JSON file (sketched after this list):
  7. Create the IAM policy and attach it to the previously created IAM role:
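A sketch of steps 6 and 7. The role needs to create and track CloudWatch Logs export tasks and to write its own function logs; the policy name is an example. Note that create_export_task also requires the destination S3 bucket’s bucket policy to grant the CloudWatch Logs service principal s3:GetBucketAcl and s3:PutObject.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateExportTask",
                "logs:DescribeExportTasks",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}

aws iam create-policy \
    --policy-name LambdaAuditLogExportPolicy \
    --policy-document file://PermissionsForLambda.json
aws iam attach-role-policy \
    --role-name LambdaAuditLogExportRole \
    --policy-arn arn:aws:iam::<account-id>:policy/LambdaAuditLogExportPolicy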

Create ZIP file for the Python Lambda function

To create a file with the code the Lambda function will invoke, complete the following steps:

  1. Create and write to a file named lambda_function.py using your preferred text editor:

nano lambda_function.py

  2. Within the file, insert the following code:
import boto3
import os
import datetime
import logging
import time
from botocore.exceptions import ClientError, NoCredentialsError, BotoCoreError

logger = logging.getLogger()
logger.setLevel(logging.INFO)
def check_active_export_tasks(client):
    """Check for any active export tasks"""
    try:
        response = client.describe_export_tasks()
        active_tasks = [
            task for task in response.get('exportTasks', [])
            if task.get('status', {}).get('code') in ['RUNNING', 'PENDING']
        ]
        return active_tasks
    except ClientError as e:
        logger.error(f"Error checking active export tasks: {e}")
        return []
def wait_for_export_task_completion(client, max_wait_minutes=15, check_interval=60):
    """Wait for any active export tasks to complete"""
    max_wait_seconds = max_wait_minutes * 60
    waited_seconds = 0
    
    while waited_seconds < max_wait_seconds:
        active_tasks = check_active_export_tasks(client)
        
        if not active_tasks:
            logger.info("No active export tasks found, proceeding...")
            return True
        
        logger.info(f"Found {len(active_tasks)} active export task(s). Waiting {check_interval} seconds...")
        for task in active_tasks:
            task_id = task.get('taskId', 'Unknown')
            status = task.get('status', {}).get('code', 'Unknown')
            logger.info(f"Active task ID: {task_id}, Status: {status}")
        
        time.sleep(check_interval)
        waited_seconds += check_interval
    
    logger.warning(f"Timed out waiting for export tasks to complete after {max_wait_minutes} minutes")
    return False
def lambda_handler(event, context):
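    """Validate configuration, wait for any in-flight export task, then create a one-day CloudWatch Logs export task."""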
    try:
        
        required_env_vars = ['GROUP_NAME', 'DESTINATION_BUCKET', 'PREFIX', 'NDAYS']
        missing_vars = [var for var in required_env_vars if not os.environ.get(var)]
        
        if missing_vars:
            error_msg = f"Missing required environment variables: {', '.join(missing_vars)}"
            logger.error(error_msg)
            return {
                'statusCode': 400,
                'body': {'error': error_msg}
            }
        
        
        GROUP_NAME = os.environ['GROUP_NAME'].strip()
        DESTINATION_BUCKET = os.environ['DESTINATION_BUCKET'].strip()
        PREFIX = os.environ['PREFIX'].strip()
        NDAYS = os.environ['NDAYS'].strip()
        
        
        MAX_WAIT_MINUTES = int(os.environ.get('MAX_WAIT_MINUTES', '30'))
        CHECK_INTERVAL = int(os.environ.get('CHECK_INTERVAL', '60'))
        RETRY_ON_CONCURRENT = os.environ.get('RETRY_ON_CONCURRENT', 'true').lower() == 'true'
        
        
        if not all([GROUP_NAME, DESTINATION_BUCKET, PREFIX, NDAYS]):
            error_msg = "Environment variables cannot be empty"
            logger.error(error_msg)
            return {
                'statusCode': 400,
                'body': {'error': error_msg}
            }
        
        
        try:
            nDays = int(NDAYS)
            if nDays <= 0:
                raise ValueError("NDAYS must be a positive integer")
        except ValueError as e:
            error_msg = f"Invalid NDAYS value '{NDAYS}': {str(e)}"
            logger.error(error_msg)
            return {
                'statusCode': 400,
                'body': {'error': error_msg}
            }
        
        
        try:
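            # Export window: from nDays days ago to (nDays - 1) days ago, converted to epoch milliseconds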
            currentTime = datetime.datetime.now()
            StartDate = currentTime - datetime.timedelta(days=nDays)
            EndDate = currentTime - datetime.timedelta(days=nDays - 1)
            
            fromDate = int(StartDate.timestamp() * 1000)
            toDate = int(EndDate.timestamp() * 1000)
            
            
            if fromDate >= toDate:
                raise ValueError("Invalid date range: fromDate must be less than toDate")
                
        except (ValueError, OverflowError) as e:
            error_msg = f"Date calculation error: {str(e)}"
            logger.error(error_msg)
            return {
                'statusCode': 400,
                'body': {'error': error_msg}
            }
        
        
        try:
            BUCKET_PREFIX = os.path.join(PREFIX, StartDate.strftime('%Y{0}%m{0}%d').format(os.path.sep))
        except Exception as e:
            error_msg = f"Error creating bucket prefix: {str(e)}"
            logger.error(error_msg)
            return {
                'statusCode': 500,
                'body': {'error': error_msg}
            }
        
        
        logger.info(f"Starting export task for log group: {GROUP_NAME}")
        logger.info(f"Date range: {StartDate.strftime('%Y-%m-%d')} to {EndDate.strftime('%Y-%m-%d')}")
        logger.info(f"Destination: s3://{DESTINATION_BUCKET}/{BUCKET_PREFIX}")
        
        
        try:
            client = boto3.client('logs')
        except NoCredentialsError:
            error_msg = "AWS credentials not found"
            logger.error(error_msg)
            return {
                'statusCode': 500,
                'body': {'error': error_msg}
            }
        except Exception as e:
            error_msg = f"Error creating boto3 client: {str(e)}"
            logger.error(error_msg)
            return {
                'statusCode': 500,
                'body': {'error': error_msg}
            }
        
        
        if RETRY_ON_CONCURRENT:
            logger.info("Checking for active export tasks...")
            active_tasks = check_active_export_tasks(client)
            
            if active_tasks:
                logger.info(f"Found {len(active_tasks)} active export task(s). Waiting for completion...")
                if not wait_for_export_task_completion(client, MAX_WAIT_MINUTES, CHECK_INTERVAL):
                    return {
                        'statusCode': 409,
                        'body': {
                            'error': f'Active export task(s) still running after {MAX_WAIT_MINUTES} minutes',
                            'activeTaskCount': len(active_tasks)
                        }
                    }
        
        
        try:
            response = client.create_export_task(
                logGroupName=GROUP_NAME,
                fromTime=fromDate,
                to=toDate,
                destination=DESTINATION_BUCKET,
                destinationPrefix=BUCKET_PREFIX
            )
            
            task_id = response.get('taskId', 'Unknown')
            logger.info(f"Export task created successfully with ID: {task_id}")
            
            return {
                'statusCode': 200,
                'body': {
                    'message': 'Export task created successfully',
                    'taskId': task_id,
                    'logGroup': GROUP_NAME,
                    'fromDate': StartDate.isoformat(),
                    'toDate': EndDate.isoformat(),
                    'destination': f"s3://{DESTINATION_BUCKET}/{BUCKET_PREFIX}"
                }
            }
            
        except ClientError as e:
            error_code = e.response['Error']['Code']
            error_msg = e.response['Error']['Message']
            
            
            if error_code == 'ResourceNotFoundException':
                logger.error(f"Log group '{GROUP_NAME}' not found")
                return {
                    'statusCode': 404,
                    'body': {'error': f"Log group '{GROUP_NAME}' not found"}
                }
            elif error_code == 'LimitExceededException':
                
                logger.error(f"Export task limit exceeded (concurrent task running): {error_msg}")
                
                
                active_tasks = check_active_export_tasks(client)
                
                return {
                    'statusCode': 409,
                    'body': {
                        'error': 'Cannot create export task: Another export task is already running',
                        'details': error_msg,
                        'activeTaskCount': len(active_tasks),
                        'suggestion': 'Only one export task can run at a time. Please wait for the current task to complete or set RETRY_ON_CONCURRENT=true to auto-retry.'
                    }
                }
            elif error_code == 'InvalidParameterException':
                logger.error(f"Invalid parameter: {error_msg}")
                return {
                    'statusCode': 400,
                    'body': {'error': f"Invalid parameter: {error_msg}"}
                }
            elif error_code == 'AccessDeniedException':
                logger.error(f"Access denied: {error_msg}")
                return {
                    'statusCode': 403,
                    'body': {'error': f"Access denied: {error_msg}"}
                }
            else:
                logger.error(f"AWS ClientError ({error_code}): {error_msg}")
                return {
                    'statusCode': 500,
                    'body': {'error': f"AWS error: {error_msg}"}
                }
                
        except BotoCoreError as e:
            error_msg = f"BotoCore error: {str(e)}"
            logger.error(error_msg)
            return {
                'statusCode': 500,
                'body': {'error': error_msg}
            }
            
        except Exception as e:
            error_msg = f"Unexpected error creating export task: {str(e)}"
            logger.error(error_msg)
            return {
                'statusCode': 500,
                'body': {'error': error_msg}
            }
    
    except Exception as e:
        
        error_msg = f"Unexpected error in lambda_handler: {str(e)}"
        logger.error(error_msg, exc_info=True)
        return {
            'statusCode': 500,
            'body': {'error': 'Internal server error'}
        }
  3. Zip the file using the following command:

zip function.zip lambda_function.py

Create Lambda function

Complete the following steps to create a Lambda function:

  1. Connect to a terminal with the AWS CLI or CloudShell.
  2. Run the following command, which references the zip file previously created:
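A sketch of that command, assuming the Python 3.12 runtime, the Lambda role created earlier (LambdaAuditLogExportRole is an example name), and placeholder values for the environment variables the function reads:

aws lambda create-function \
    --function-name ExportAuditLogsToS3 \
    --runtime python3.12 \
    --handler lambda_function.lambda_handler \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::<account-id>:role/LambdaAuditLogExportRole \
    --timeout 900 \
    --environment "Variables={GROUP_NAME=<audit-log-group>,DESTINATION_BUCKET=<bucket_name>,PREFIX=audit-logs,NDAYS=1}"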

The NDAYS variable in the preceding command determines which day’s audit logs are exported per invocation of the Lambda function. For example, if you plan on exporting logs one time per day to Amazon S3, set NDAYS=1, as shown in the preceding command.

  3. Add concurrency limits to keep executions under control:
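A sketch, assuming the example function name used above:

aws lambda put-function-concurrency \
    --function-name ExportAuditLogsToS3 \
    --reserved-concurrent-executions 2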

Note: Reserved concurrency in Lambda sets a fixed limit on how many instances of your function can run simultaneously, like having a specific number of workers for a task. In this database export scenario, we’re limiting it to 2 concurrent executions to prevent overwhelming the database, avoid API throttling, and ensure smooth, controlled exports. This limitation helps maintain system stability, prevents resource contention, and keeps costs in check.

In this section, we created a Lambda function that will handle the CloudWatch log exports, configured its essential parameters including environment variables, and set a concurrency limit to ensure controlled execution. Next, we’ll create an EventBridge schedule that will automatically trigger this Lambda function at specified intervals to perform the log exports.

Create EventBridge schedule

Complete the following steps to create an EventBridge schedule to invoke the Lambda function at an interval of your choosing:

  1. Connect to a terminal with the AWS CLI or CloudShell.
  2. Run the following command:
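A sketch of a daily schedule; the schedule name is an example, and the role ARN must be the EventBridge Scheduler role created earlier:

aws scheduler create-schedule \
    --name daily-audit-log-export \
    --schedule-expression "rate(1 day)" \
    --flexible-time-window '{"Mode": "OFF"}' \
    --target '{"Arn": "arn:aws:lambda:<region>:<account-id>:function:ExportAuditLogsToS3", "RoleArn": "arn:aws:iam::<account-id>:role/EventBridgeSchedulerInvokeLambdaRole"}'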

The schedule-expression parameter in the preceding command must correspond to the NDAYS environment variable in the previously created Lambda function (for example, rate(1 day) with NDAYS=1).

This solution provides an efficient, scheduled approach to exporting RDS audit logs to Amazon S3 using AWS Lambda and EventBridge Scheduler. By leveraging these serverless components, we’ve created a cost-effective, automated system that periodically transfers audit logs to S3 for long-term storage and analysis. This method is particularly useful for organizations that need regular, batch-style exports of their database audit logs, allowing for easier compliance reporting and historical data analysis.

While the first solution offers a scheduled, batch-processing approach, some scenarios require a more real-time solution for audit log processing. In our next solution, we’ll explore how to create a near real-time audit log processing system using Amazon Data Firehose. This approach allows for continuous streaming of audit logs from RDS to S3, providing almost immediate access to log data.

Solution 2: Create near real-time audit log processing with Amazon Data Firehose

In this section, we review how to create a near real-time audit log export to Amazon S3 using the power of Data Firehose. With this solution, you can directly load the latest audit log files to an S3 bucket for quick analysis, manipulation, or other purposes.

Create IAM role for CloudWatch Logs

The first step is to create an IAM role responsible for allowing CloudWatch Logs to put records into the Firehose delivery stream (CWLtoDataFirehoseRole). Complete the following steps to create this role:

  1. Connect to a terminal with the AWS CLI or CloudShell.
  2. Create and write to a JSON file for the IAM trust policy using your preferred text editor:

nano TrustPolicyForCWL.json

  3. Insert the following trust policy into the JSON file (sketched after this list):
  4. Create and write to a new JSON file for the IAM permissions policy using your preferred text editor:
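A sketch of the trust policy for step 3; the aws:SourceArn condition restricting it to log groups in your account is a recommended assumption, not a requirement:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "logs.amazonaws.com" },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringLike": {
                    "aws:SourceArn": "arn:aws:logs:<region>:<account-id>:*"
                }
            }
        }
    ]
}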

nano PermissionsForCWL.json

  5. Insert the following permissions into the JSON file (sketched after this list):
  6. Use the following AWS CLI command to create the IAM role for CloudWatch Logs to insert records into the Firehose delivery stream (also sketched after this list):
  7. Create the IAM policy and attach it to the previously created IAM role:
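A sketch of steps 5 through 7; the policy name is an example, and the delivery stream name must match the one you create later:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
            "Resource": "arn:aws:firehose:<region>:<account-id>:deliverystream/<delivery-stream-name>"
        }
    ]
}

aws iam create-role \
    --role-name CWLtoDataFirehoseRole \
    --assume-role-policy-document file://TrustPolicyForCWL.json
aws iam create-policy \
    --policy-name CWLtoDataFirehosePolicy \
    --policy-document file://PermissionsForCWL.json
aws iam attach-role-policy \
    --role-name CWLtoDataFirehoseRole \
    --policy-arn arn:aws:iam::<account-id>:policy/CWLtoDataFirehosePolicy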

Create IAM role for Firehose delivery stream

The next step is to create an IAM role (DataFirehosetoS3Role) responsible for allowing the Firehose delivery stream to insert the audit logs into an S3 bucket. Complete the following steps to create this role:

  1. Connect to a terminal with the AWS CLI or CloudShell.
  2. Create and write to a JSON file for the IAM trust policy using your preferred text editor:

nano TrustPolicyForFirehose.json

  3. Insert the following trust policy into the JSON file (sketched after this list):
  4. Create and write to a new JSON file for the IAM permissions using your preferred text editor:
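A sketch of the trust policy for step 3; some guides also add an sts:ExternalId condition set to your account ID, which is optional here:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "firehose.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}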

nano PermissionsForFirehose.json

  5. Insert the following permissions into the JSON file (sketched after this list):
  6. Use the following AWS CLI command to create the IAM role for Data Firehose to perform operations on the S3 bucket (also sketched after this list):
  7. Create the IAM policy and attach it to the previously created IAM role:
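A sketch of steps 5 through 7, using the bucket created in the prerequisites, the JSON files created in steps 2 and 4, and an example policy name:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket_name>",
                "arn:aws:s3:::<bucket_name>/*"
            ]
        }
    ]
}

aws iam create-role \
    --role-name DataFirehosetoS3Role \
    --assume-role-policy-document file://TrustPolicyForFirehose.json
aws iam create-policy \
    --policy-name DataFirehosetoS3Policy \
    --policy-document file://PermissionsForFirehose.json
aws iam attach-role-policy \
    --role-name DataFirehosetoS3Role \
    --policy-arn arn:aws:iam::<account-id>:policy/DataFirehosetoS3Policy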

Create the Firehose delivery stream

Now you create the Firehose delivery stream to allow near real-time transfer of MySQL audit logs from CloudWatch Logs to your S3 bucket. Complete the following steps:

  1. Create the Firehose delivery stream with the following AWS CLI command (sketched after step 3). Setting the buffer interval and size determines how long your data is buffered before being delivered to the S3 bucket. For more information, refer to the AWS documentation. In this example, we use the default values.
  2. Wait until the Firehose delivery stream becomes active (this might take a few minutes). You can use the Firehose CLI describe-delivery-stream command to check the status of the delivery stream. Note the DeliveryStreamDescription.DeliveryStreamARN value, to use in a later step:

aws firehose describe-delivery-stream --delivery-stream-name <delivery-stream-name>

  3. After the Firehose delivery stream is in an active state, create a CloudWatch Logs subscription filter (sketched below). This subscription filter immediately starts the flow of near real-time log data from the chosen log group to your Firehose delivery stream. Make sure to provide the log group name that you want to push to Amazon S3 and the destination-arn of your Firehose delivery stream.
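Sketches of the commands for steps 1 and 3; the stream and filter names are examples, the buffering hints shown are the service defaults (5 MB or 300 seconds), and the role ARNs are the two roles created earlier:

# Step 1: create the delivery stream
aws firehose create-delivery-stream \
    --delivery-stream-name mysql-audit-log-stream \
    --delivery-stream-type DirectPut \
    --extended-s3-destination-configuration '{
        "RoleARN": "arn:aws:iam::<account-id>:role/DataFirehosetoS3Role",
        "BucketARN": "arn:aws:s3:::<bucket_name>",
        "BufferingHints": { "SizeInMBs": 5, "IntervalInSeconds": 300 }
    }'

# Step 3: subscribe the audit log group to the delivery stream
aws logs put-subscription-filter \
    --log-group-name <audit-log-group> \
    --filter-name mysql-audit-to-firehose \
    --filter-pattern "" \
    --destination-arn arn:aws:firehose:<region>:<account-id>:deliverystream/mysql-audit-log-stream \
    --role-arn arn:aws:iam::<account-id>:role/CWLtoDataFirehoseRole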

Your near real-time MySQL audit log solution is now properly configured and will begin delivering MySQL audit logs to your S3 bucket through the Firehose delivery stream.

Clean up

To clean up your resources, complete the following steps (depending on which solution you used):

  1. Delete the RDS instance or Aurora cluster.
  2. Delete the Lambda functions.
  3. Delete the EventBridge rule.
  4. Delete the S3 bucket.
  5. Delete the Firehose delivery stream.

Conclusion

In this post, we’ve presented two solutions for managing Aurora MySQL or RDS for MySQL audit logs, each offering unique benefits for different business use cases.

We encourage you to implement these solutions in your own environment and share your experiences, challenges, and success stories in the comments section. Your feedback and real-world implementations can help fellow AWS users choose and adapt these solutions to best fit their specific audit logging needs.


About the authors

Mahek Shah

Mahek is a Cloud Support Engineer I who has worked within the AWS database team for almost 2 years. Mahek is an Amazon Aurora MySQL and RDS MySQL subject matter expert with deep expertise in helping customers implement robust, high-performing, and secure database solutions within the AWS Cloud.

Ryan Moore

Ryan is a Technical Account Manager at AWS with three years of experience, having launched his career on the AWS database team. He is an Aurora MySQL and RDS MySQL subject matter expert who specializes in enabling customers to build performant, scalable, and secure architectures within the AWS Cloud.

Nirupam Datta

Nirupam is a Sr. Technical Account Manager at AWS. He has been with AWS for over 6 years. With over 14 years of experience in database engineering and infra-architecture, Nirupam is also a subject matter expert in the Amazon RDS core systems and Amazon RDS for SQL Server. He provides technical assistance to customers, guiding them to migrate, optimize, and navigate their journey in the AWS Cloud.

Planet for the MySQL Community

Laravel Toaster Magic v2.0 – The Theme Revolution

https://opengraph.githubassets.com/31857b29a46a84e2c6bbc712f5ec663b85c7a1b32aa7df4cc5c9e371828cb100/devrabiul/laravel-toaster-magic/releases/tag/v2.0

🌟 One Package, Infinite Possibilities

Laravel Toaster Magic is designed to be the only toaster package you’ll need for any type of Laravel project.
Whether you are building a corporate dashboard, a modern SaaS, a gaming platform, or a simple blog, I have crafted a theme that fits perfectly.

"One Package, Many Themes." — No need to switch libraries just to change the look.

This major release brings 7 stunning new themes, full Livewire v3/v4 support, and modern UI enhancements.


🚀 What’s New?

1. 🎨 7 Beautiful New Themes

I have completely redesigned the visual experience. You can now switch between 7 distinct themes by simply updating your config.

Theme          Config Value       Description
Default        'default'          Clean, professional, and perfect for corporate apps.
Material       'material'         Google Material Design inspired. Flat and bold.
iOS            'ios'              (Fan Favorite) Apple-style notifications with backdrop blur and smooth bounce animations.
Glassmorphism  'glassmorphism'    Trendy frosted glass effect with vibrant borders and semi-transparent backgrounds.
Neon           'neon'             (Dark Mode Best) Cyberpunk-inspired with glowing neon borders and dark gradients.
Minimal        'minimal'          Ultra-clean, distraction-free design with simple left-border accents.
Neumorphism    'neumorphism'      Soft UI design with 3D embossed/debossed plastic-like shadows.

👉 How to use:

// config/laravel-toaster-magic.php
'theme' => 'neon', 

2. ⚡ Full Livewire v3 & v4 Support

I’ve rewritten the JavaScript core to support Livewire v3 & v4 natively.

  • No more manually registering custom event listeners.
  • Uses Livewire.on (v3) or standard event dispatching.
  • Works seamlessly with SPA mode and wire:navigate.
// Dispatch from component
$this->dispatch('toastMagic', 
    status: 'success', 
    message: 'User Saved!', 
    title: 'Great Job'
);

3. 🌈 Gradient Mode

Want your toasts to pop without changing the entire theme? Enable Gradient Mode to add a subtle "glow-from-within" gradient based on the toast type (Success, Error, etc.).

// config/laravel-toaster-magic.php
'gradient_enable' => true

Works best with Default, Material, Neon, and Glassmorphism themes.


4. 🎨 Color Mode

Don’t want themes? Just want solid colors? Color Mode forces the background of the toast to match its type (Green for Success, Red for Error, etc.), overriding theme backgrounds for high-visibility alerts.

// config/laravel-toaster-magic.php
'color_mode' => true

5. 🛠 Refactored CSS Architecture

I have completely modularized the CSS.

  • CSS Variables: All colors and values are now CSS variables, making runtime customization instant.
  • Scoped Styles: Themes are namespaced (.theme-neon, .theme-ios) to prevent conflicts.
  • Dark Mode: Native dark mode support via body[theme="dark"].

📋 Upgrade Guide

Upgrading from v1.x to v2.0?

  1. Update Composer:

    composer require devrabiul/laravel-toaster-magic "^2.0"
  2. Republish Assets (Critical for new CSS/JS):

    php artisan vendor:publish --tag=toast-magic-assets --force
  3. Check Config:
    If you have a published config file, add the new options:

    'options' => [
        'theme' => 'default',
        'gradient_enable' => false,
        'color_mode' => false,
    ],
    'livewire_version' => 'v3',

🏁 Conclusion

v2.0 transforms Laravel Toaster Magic from a simple notification library into a UI-first experience. Whether you’re building a sleek SaaS (use iOS), a gaming platform (use Neon), or an admin dashboard (use Material), there is likely a theme for you.

Enjoy the magic! 🍞✨


Laravel News Links

KeyPort Versa58 Swiss Army Upgrade System

https://theawesomer.com/photos/2026/01/keyport_versa58_swiss_army_accessories_t.jpg

KeyPort Versa58 Swiss Army Upgrade System

KeyPort’s latest creation is a modular upgrade system for standard 58mm Swiss Army Knives. At the heart of the Versa58 are its magnetic mounting plates, which let you easily snap tools on and off. The first modules include a mini flashlight, a retractable pen, a USB-C flash drive, a pocket clip, and a multi-purpose holder for a toothpick, tweezers, or ferro rod.

The Awesomer

MySQL 8.4 disables AHI – Why and What you need to know

MySQL 8.4 changed the InnoDB adaptive hash index (innodb_adaptive_hash_index) default from ON to OFF, a major shift after years of it being enabled by default. Note that the MySQL adaptive hash index (AHI) feature remains fully available and configurable.

This blog is me going down the rabbit hole so you don’t have to, presenting what you actually need to know. I am sure you’re a great MySQLer know-it-all and you might want to skip this, but DON’T: participate in the bonus task towards the end.

Note that MariaDB already made this change in 10.5.4 (see MDEV-20487), so MySQL is doing nothing new! But why? Let me start with What(?) first!

What is Adaptive Hash Index in MySQL (AHI)

This has been discussed so many times, I’ll keep it short.

We know InnoDB uses B-trees for all indexes. A typical lookup requires traversing 3 – 4 levels: root > internal nodes > leaf page. For millions of rows, this is efficient but not instant.

AHI is an in-memory hash table that sits on top of your B-tree indexes. It monitors access patterns in real-time, and when it detects frequent lookups with the same search keys, it builds hash entries that map those keys directly to buffer pool pages.

So the next time the same search key is hit, instead of a multi-level B-tree traversal, you get a single hash lookup from the AHI memory section and a direct jump to the buffer pool page, giving you immediate data access.

FYI, AHI is part of the InnoDB buffer pool.

What is “adaptive” in the “hash index”

InnoDB watches your workload and decides what to cache adaptively based on access patterns and lookup frequency. You don’t configure which indexes or keys to hash; InnoDB figures it out automatically. High-frequency lookups? AHI builds entries. Access patterns change? AHI rebuilds the hash. It’s a self-tuning optimization that adjusts to your actual runtime behavior and query patterns. That’s the adaptive-ness.

Sounds perfect, right? What’s the problem then?

The Problem(s) with AHI

– Overhead of AHI

AHI is optimal for frequently accessed pages, but for infrequently accessed ones? The lookup path for such a query is:

– Check AHI
– Check buffer pool
– Read from disk

For infrequent or random access patterns, the AHI lookup isn’t useful; the query only falls through to the regular B-tree path anyway. You spend extra memory searches and comparisons and burn CPU cycles for nothing.

– There is a latch on the AHI door

AHI is a shared data structure; though partitioned (innodb_adaptive_hash_index_parts), it uses latches for controlled access. Thus, as concurrency increases, AHI may cause threads to block each other.

– The unpredictability of AHI

This appears to be the main reason for disabling the Adaptive Hash Index in MySQL 8.4. The optimizer needs to predict costs BEFORE the query runs. It has to decide: “Should I use index A or index B?” AHI is built dynamically and depends on how frequently keys are accessed, so the optimizer cannot predict a consistent query path.

The comments in this IndexLookupCost function section of cost_model.h explains it better, and I quote:

“With AHI enabled the cost of random lookups does not appear to be predictable using standard explanatory variables such as index height or the logarithm of the number of rows in the index.”

I encourage you to admire the explanation in the comments here: https://dev.mysql.com/doc/dev/mysql-server/latest/cost__model_8h_source.html

Why AHI Is Disabled in MySQL 8.4

I’d word it like this… the default change of the InnoDB Adaptive Hash Index in MySQL 8.4 was driven by,
One: the realization that “favoring predictability” is more important than potential gains in specific scenarios, and
Two: end users still have the feature available and can enable it if they know (or think) it’d help them.

In my production experience, AHI frequently becomes a contention bottleneck under certain workloads: write-heavy, highly concurrent, or when the active dataset is larger than the buffer pool. Disabling AHI ensures consistent response times and eliminates a common source of performance unpredictability.

That brings us to our next segment: what is it that YOU need to do? And importantly, HOW?

The bottom line: MySQL 8.4 defaults to innodb_adaptive_hash_index=OFF. Before upgrading, verify whether AHI is actually helping your workload or quietly hurting it.
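A quick way to check where you stand before and after the upgrade; this is a sketch assuming a local mysql client with privileges to change global variables (SET PERSIST keeps the choice across restarts):

mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_hash_index'"
# Re-enable (or disable) AHI after measuring; SET PERSIST makes the setting survive a restart
mysql -e "SET PERSIST innodb_adaptive_hash_index = ON"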

How to track MySQL AHI usage

Using the MySQL CLI

Use the SHOW ENGINE INNODB STATUS command and look for the section that says “INSERT BUFFER AND ADAPTIVE HASH INDEX”:

SHOW ENGINE INNODB STATUS\G
8582.85 hash searches/s, 8518.85 non-hash searches/s

Here:
hash searches: Lookups served by AHI
non-hash searches: Regular B-tree lookups (after AHI search fails)

If your hash search rate is significantly higher, AHI is actively helping.
If the numbers for AHI are similar or lower, AHI isn’t providing much benefit.

Is AHI causing contention in MySQL?

In SHOW ENGINE INNODB STATUS, look for wait events in the SEMAPHORES section:

-Thread X has waited at btr0sea.ic line … seconds the semaphore:
S-lock on RW-latch at … created in file btr0sea.cc line …

If SHOW ENGINE INNODB STATUS shows many threads waiting on RW-latches created in btr0sea.cc, that is a sign of adaptive hash index locking contention, and a reason to disable it.
Refer: https://dev.mysql.com/doc/dev/mysql-server/latest/btr0sea_8cc.html

Monitoring AHI for MySQL

How about watching a chart that shows AHI efficiency? Percona Monitoring and Management makes it easy to visualize AHI activity and decide whether it benefits your current workload. Here are 1000 words for you:

Bonus Task

Think you’ve got MySQL AHI figured out? Let’s do this task:

  1. Open pmmdemo.percona.com
  2. Go to Dashboards > MySQL > MySQL InnoDB Details
  3. Scroll down to “Innodb Adaptive Hash Index” section
  4. Answer this question in comments section: Which MySQL instances are better off without AHI?

Conclusion

AHI is a great idea and it works until it doesn’t. You’ve gotta do the homework: track usage, measure impact, then decide. Make sure you’re ready for your upgrade.
If your monitoring shows consistently high hash search rates with minimal contention, you’re in the sweet spot and AHI should remain enabled. If not, innodb_adaptive_hash_index is good to remain OFF.
I recall a recent song verse that suits MySQL AHI well: “I’m a king but I’m far from a saint” “It’s a blessing and a curse” (IUKUK)

Have you seen AHI help or hurt in your systems? What’s your plan for MySQL 8.4? I’d love to hear real-world experiences… the database community learns best when we share our war stories.

PS

Open source is beautiful: you can actually read the code (and comments) and understand the “why” behind decisions.

Planet for the MySQL Community

150+ SQL Commands Explained With Examples (2026 Update)

https://codeforgeek.com/wp-content/uploads/2026/01/150-SQL-Commands-Explained.png

In this guide, we explain 150+ SQL commands in simple words, covering everything from basic queries to advanced functions for 2026. We cover almost every SQL command that exists in one single place, so you never have to go search for anything anywhere else. If you master these 150 commands, you will become an SQL […]

Planet MySQL

Introducing MySQL Studio – Reducing the Barriers to Data Innovation

MySQL Studio in Oracle Cloud Infrastructure (OCI) is a unified environment for working with MySQL and HeatWave features through a single, streamlined interface. It brings SQL authoring, AI-assisted chat, and Jupyter-compatible notebooks together with project-based organization to help teams get from database setup to productive analytics faster. The same […]

Planet MySQL

MySQL Performance Tuning: From Slow Queries to Lightning-Fast Database

https://media2.dev.to/dynamic/image/width=1000,height=500,fit=cover,gravity=auto,format=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvd5exh6bx5tq40s8ye01.png

Database performance is often the bottleneck in web applications. This guide covers comprehensive MySQL optimization techniques from query-level improvements to server configuration tuning.



Understanding Query Execution

Before optimizing, understand how MySQL executes queries using EXPLAIN:

EXPLAIN SELECT 
    o.id,
    o.total,
    u.name,
    COUNT(oi.id) as item_count
FROM orders o
JOIN users u ON o.user_id = u.id
JOIN order_items oi ON o.id = oi.order_id
WHERE o.status = 'completed'
    AND o.created_at > '2024-01-01'
GROUP BY o.id
ORDER BY o.created_at DESC
LIMIT 20;

Key EXPLAIN columns to watch: type (aim for ref or better), rows (lower is better), Extra (avoid "Using filesort" and "Using temporary").



EXPLAIN Output Analysis

+----+-------------+-------+--------+---------------+---------+---------+------------------+------+-------------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref              | rows | Extra       |
+----+-------------+-------+--------+---------------+---------+---------+------------------+------+-------------+
|  1 | SIMPLE      | o     | range  | idx_status    | idx_... | 4       | NULL             | 5000 | Using where |
|  1 | SIMPLE      | u     | eq_ref | PRIMARY       | PRIMARY | 4       | mydb.o.user_id   |    1 | NULL        |
|  1 | SIMPLE      | oi    | ref    | idx_order     | idx_... | 4       | mydb.o.id        |    3 | Using index |
+----+-------------+-------+--------+---------------+---------+---------+------------------+------+-------------+



Indexing Strategies



Composite Index Design

Design indexes based on query patterns:

-- Query pattern: Filter by status, date range, sort by date
SELECT * FROM orders 
WHERE status = 'pending' 
AND created_at > '2024-01-01'
ORDER BY created_at DESC;

-- Optimal composite index (leftmost prefix rule)
CREATE INDEX idx_orders_status_created 
ON orders(status, created_at);

-- For queries with multiple equality conditions
SELECT * FROM products
WHERE category_id = 5
AND brand_id = 10
AND is_active = 1;

-- Index with most selective column first
CREATE INDEX idx_products_brand_cat_active
ON products(brand_id, category_id, is_active);



Covering Indexes

Avoid table lookups with covering indexes:

-- Query only needs specific columns
SELECT id, name, price FROM products
WHERE category_id = 5
ORDER BY price;

-- Covering index includes all needed columns
CREATE INDEX idx_products_covering
ON products(category_id, price, id, name);

-- MySQL can satisfy query entirely from index
-- EXPLAIN shows "Using index" in Extra column



Index for JOIN Operations

-- Ensure foreign keys are indexed
CREATE INDEX idx_orders_user_id ON orders(user_id);
CREATE INDEX idx_order_items_order_id ON order_items(order_id);
CREATE INDEX idx_order_items_product_id ON order_items(product_id);

-- For complex joins, index the join columns
SELECT p.name, SUM(oi.quantity) as total_sold
FROM products p
JOIN order_items oi ON p.id = oi.product_id
JOIN orders o ON oi.order_id = o.id
WHERE o.created_at > '2024-01-01'
GROUP BY p.id
ORDER BY total_sold DESC;

-- Indexes needed:
-- orders(created_at) - for WHERE filter
-- order_items(order_id) - for JOIN
-- order_items(product_id) - for JOIN

Don’t over-index! Each index slows down INSERT/UPDATE operations. Monitor unused indexes with sys.schema_unused_indexes.



Query Optimization Techniques



Avoiding Full Table Scans

-- Bad: Function on indexed column prevents index use
SELECT * FROM users WHERE YEAR(created_at) = 2024;

-- Good: Range query uses index
SELECT * FROM users 
WHERE created_at >= '2024-01-01' 
AND created_at < '2025-01-01';

-- Bad: Leading wildcard prevents index use
SELECT * FROM products WHERE name LIKE '%phone%';

-- Good: Trailing wildcard can use index
SELECT * FROM products WHERE name LIKE 'phone%';

-- For full-text search, use FULLTEXT index
ALTER TABLE products ADD FULLTEXT INDEX ft_name (name);
SELECT * FROM products WHERE MATCH(name) AGAINST('phone');



Optimizing Subqueries

-- Bad: Correlated subquery runs for each row
SELECT * FROM products p
WHERE price > (
    SELECT AVG(price) FROM products 
    WHERE category_id = p.category_id
);

-- Good: JOIN with derived table
SELECT p.* FROM products p
JOIN (
    SELECT category_id, AVG(price) as avg_price
    FROM products
    GROUP BY category_id
) cat_avg ON p.category_id = cat_avg.category_id
WHERE p.price > cat_avg.avg_price;

-- Even better: Window function (MySQL 8.0+)
SELECT * FROM (
    SELECT *, AVG(price) OVER (PARTITION BY category_id) as avg_price
    FROM products
) t WHERE price > avg_price;



Pagination Optimization

-- Bad: OFFSET scans and discards rows
SELECT * FROM products ORDER BY id LIMIT 10 OFFSET 100000;

-- Good: Keyset pagination (cursor-based)
SELECT * FROM products 
WHERE id > 100000  -- Last seen ID
ORDER BY id 
LIMIT 10;

-- For complex sorting, use deferred join
SELECT p.* FROM products p
JOIN (
    SELECT id FROM products
    ORDER BY created_at DESC, id DESC
    LIMIT 10 OFFSET 100000
) t ON p.id = t.id;



Server Configuration Tuning



InnoDB Buffer Pool

# my.cnf - For dedicated database server with 32GB RAM

[mysqld]
# Buffer pool should be 70-80% of available RAM
innodb_buffer_pool_size = 24G
innodb_buffer_pool_instances = 24

# Log file size affects recovery time vs write performance
innodb_log_file_size = 2G
innodb_log_buffer_size = 64M

# Flush settings (1 = safest, 2 = faster)
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT

# Thread concurrency
innodb_thread_concurrency = 0
innodb_read_io_threads = 8
innodb_write_io_threads = 8



Connection and Memory Settings

[mysqld]
# Connection handling
max_connections = 500
thread_cache_size = 100

# Memory per connection
sort_buffer_size = 4M
join_buffer_size = 4M
read_buffer_size = 2M
read_rnd_buffer_size = 8M

# Temporary tables
tmp_table_size = 256M
max_heap_table_size = 256M

# Table cache
table_open_cache = 4000
table_definition_cache = 2000



Monitoring and Profiling



Slow Query Log

[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
log_queries_not_using_indexes = 1



Performance Schema Queries

-- Find top 10 slowest queries
SELECT 
    DIGEST_TEXT,
    COUNT_STAR as exec_count,
    ROUND(SUM_TIMER_WAIT/1000000000000, 2) as total_time_sec,
    ROUND(AVG_TIMER_WAIT/1000000000, 2) as avg_time_ms,
    SUM_ROWS_EXAMINED,
    SUM_ROWS_SENT
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;

-- Find tables with most I/O
SELECT 
    object_schema,
    object_name,
    count_read,
    count_write,
    ROUND(sum_timer_read/1000000000000, 2) as read_time_sec,
    ROUND(sum_timer_write/1000000000000, 2) as write_time_sec
FROM performance_schema.table_io_waits_summary_by_table
ORDER BY sum_timer_wait DESC
LIMIT 10;

-- Find unused indexes
SELECT * FROM sys.schema_unused_indexes;

-- Find redundant indexes
SELECT * FROM sys.schema_redundant_indexes;



Real-time Monitoring

-- Current running queries
SELECT 
    id,
    user,
    host,
    db,
    command,
    time,
    state,
    LEFT(info, 100) as query
FROM information_schema.processlist
WHERE command != 'Sleep'
ORDER BY time DESC;

-- InnoDB status
SHOW ENGINE INNODB STATUS\G

-- Buffer pool hit ratio (should be > 99%)
SELECT 
    (1 - (
        (SELECT variable_value FROM performance_schema.global_status WHERE variable_name = 'Innodb_buffer_pool_reads') /
        (SELECT variable_value FROM performance_schema.global_status WHERE variable_name = 'Innodb_buffer_pool_read_requests')
    )) * 100 as buffer_pool_hit_ratio;



Partitioning for Large Tables

-- Range partitioning by date
CREATE TABLE orders (
    id BIGINT AUTO_INCREMENT,
    user_id INT NOT NULL,
    total DECIMAL(10,2),
    status VARCHAR(20),
    created_at DATETIME NOT NULL,
    PRIMARY KEY (id, created_at),
    INDEX idx_user (user_id, created_at)
) PARTITION BY RANGE (YEAR(created_at)) (
    PARTITION p2022 VALUES LESS THAN (2023),
    PARTITION p2023 VALUES LESS THAN (2024),
    PARTITION p2024 VALUES LESS THAN (2025),
    PARTITION p_future VALUES LESS THAN MAXVALUE
);

-- Queries automatically prune partitions
SELECT * FROM orders 
WHERE created_at >= '2024-01-01' 
AND created_at < '2024-07-01';
-- Only scans p2024 partition



Connection Pooling



Application-Level Pooling

// Node.js with mysql2
const mysql = require('mysql2/promise');

const pool = mysql.createPool({
  host: 'localhost',
  user: 'app_user',
  password: 'password',
  database: 'myapp',
  waitForConnections: true,
  connectionLimit: 20,
  queueLimit: 0,
  enableKeepAlive: true,
  keepAliveInitialDelay: 10000
});

// Use pool for queries
async function getUser(id) {
  const [rows] = await pool.execute(
    'SELECT * FROM users WHERE id = ?',
    [id]
  );
  return rows[0];
}



Conclusion

MySQL performance optimization is an iterative process. Start by identifying slow queries with the slow query log, analyze them with EXPLAIN, add appropriate indexes, and monitor the results. Server configuration should be tuned based on your workload characteristics and available resources.

Key takeaways:

  • Design indexes based on actual query patterns
  • Use EXPLAIN to understand query execution
  • Avoid functions on indexed columns in WHERE clauses
  • Configure InnoDB buffer pool appropriately
  • Monitor continuously with Performance Schema

Laravel News Links

EloSQL – Automatically Generate Migrations and Eloquent Models based on your SQL Database Schema

https://opengraph.githubassets.com/744ff4f3b9a5010fa8a9d56714cd15bf3cacdd32eaedb67499cb7811b54762c8/sepehr-mohseni/elosql


Elosql is a production-grade Laravel package that intelligently analyzes existing database schemas and generates precise migrations and Eloquent models. It supports MySQL, PostgreSQL, SQLite, and SQL Server, making it perfect for legacy database integration, reverse engineering, and rapid application scaffolding.

  • 🔍 Smart Schema Analysis – Automatically detects columns, indexes, foreign keys, and table relationships
  • 🚀 Multi-Database Support – Works with MySQL/MariaDB, PostgreSQL, SQLite, and SQL Server
  • 📁 Migration Generation – Creates Laravel migrations with proper dependency ordering
  • 🏗️ Model Scaffolding – Generates Eloquent models with relationships, casts, and fillable attributes
  • 🔗 Relationship Detection – Automatically detects belongsTo, hasMany, hasOne, belongsToMany, and polymorphic relationships
  • 📊 Schema Diff – Compare database schema with existing migrations
  • ⚙️ Highly Configurable – Customize every aspect of generation through config or command options
  • Production Ready – Comprehensive test suite with 90%+ coverage
Requirements:

  • PHP 8.1 or higher
  • Laravel 10.0 or 11.0

Install via Composer:

composer require sepehr-mohseni/elosql

The package will auto-register its service provider. Optionally, publish the configuration file:

php artisan vendor:publish --tag=elosql-config

Generate migrations and models for your entire database:

php artisan elosql:schema

See what will be generated without creating any files:

php artisan elosql:preview
php artisan elosql:migrations
php artisan elosql:models

The main command that generates both migrations and models.

php artisan elosql:schema [options]

Options:
  --connection=       Database connection to use (default: default connection)
  --table=            Generate for specific table(s), comma-separated
  --exclude=          Exclude specific table(s), comma-separated
  --migrations-path=  Custom path for migrations (default: database/migrations)
  --models-path=      Custom path for models (default: app/Models)
  --models-namespace= Custom namespace for models (default: App\Models)
  --no-migrations     Skip migration generation
  --no-models         Skip model generation
  --force             Overwrite existing files

Examples:

# Generate for specific tables
php artisan elosql:schema --table=users,posts,comments

# Exclude certain tables
php artisan elosql:schema --exclude=migrations,cache,sessions

# Custom output paths
php artisan elosql:schema --migrations-path=database/generated --models-path=app/Domain/Models

# Use a different database connection
php artisan elosql:schema --connection=legacy_db

Generate migration files from database schema.

php artisan elosql:migrations [options]

Options:
  --connection=   Database connection to use
  --table=        Generate for specific table(s)
  --exclude=      Exclude specific table(s)
  --path=         Custom output path
  --fresh         Generate fresh migrations (ignore existing)
  --diff          Only generate migrations for schema differences
  --force         Overwrite existing files

Examples:

# Generate migrations for a legacy database
php artisan elosql:migrations --connection=legacy --path=database/legacy-migrations

# Generate only new/changed tables
php artisan elosql:migrations --diff

Generate Eloquent model files.

php artisan elosql:models [options]

Options:
  --connection=   Database connection to use
  --table=        Generate for specific table(s)
  --exclude=      Exclude specific table(s)
  --path=         Custom output path
  --namespace=    Custom namespace
  --preview       Preview generated code without writing files
  --force         Overwrite existing files

Examples:

# Preview model generation
php artisan elosql:models --preview --table=users

# Generate with custom namespace
php artisan elosql:models --namespace="Domain\\User\\Models"

Preview the schema analysis without generating any files.

php artisan elosql:preview [options]

Options:
  --connection=   Database connection to use
  --table=        Preview specific table(s)
  --format=       Output format: table, json, yaml (default: table)

Examples:

# JSON output for processing
php artisan elosql:preview --format=json > schema.json

# View specific table structure
php artisan elosql:preview --table=users

Show differences between database schema and existing migrations.

php artisan elosql:diff [options]

Options:
  --connection=   Database connection to use
  --format=       Output format: table, json (default: table)

After publishing the config file (config/elosql.php), you can customize:

'connection' => env('ELOSQL_CONNECTION', null), // null = default connection
'exclude_tables' => [
    'migrations',
    'failed_jobs',
    'password_resets',
    'personal_access_tokens',
    'cache',
    'sessions',
],
'migrations' => [
    'path' => database_path('migrations'),
    'separate_foreign_keys' => true, // Generate FK migrations separately
    'include_drop_tables' => true,   // Include down() method
],
'models' => [
    'path' => app_path('Models'),
    'namespace' => 'App\\Models',
    'base_class' => \Illuminate\Database\Eloquent\Model::class,
    'use_guarded' => false,           // Use $guarded instead of $fillable
    'generate_phpdoc' => true,        // Generate PHPDoc blocks
    'detect_soft_deletes' => true,    // Auto-detect SoftDeletes trait
    'detect_timestamps' => true,      // Auto-detect timestamp columns
],

Customize how database types map to Laravel migration methods:

'type_mappings' => [
    'mysql' => [
        'tinyint(1)' => 'boolean',
        'json' => 'json',
        // Add custom mappings
    ],
    'pgsql' => [
        'jsonb' => 'jsonb',
        'uuid' => 'uuid',
    ],
],
'relationships' => [
    'detect_belongs_to' => true,
    'detect_has_many' => true,
    'detect_has_one' => true,
    'detect_belongs_to_many' => true,
    'detect_morph' => true,
    'pivot_table_patterns' => [
        // Regex patterns for detecting pivot tables
        '/^([a-z]+)_([a-z]+)$/',
    ],
],
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::create('posts', function (Blueprint $table) {
            $table->id();
            $table->foreignId('user_id')->constrained()->onDelete('cascade');
            $table->string('title', 255);
            $table->text('content');
            $table->enum('status', ['draft', 'published', 'archived'])->default('draft');
            $table->json('metadata')->nullable();
            $table->timestamps();
            $table->softDeletes();
            
            $table->index('status');
            $table->fullText('content');
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('posts');
    }
};
<?php

declare(strict_types=1);

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\BelongsTo;
use Illuminate\Database\Eloquent\Relations\BelongsToMany;
use Illuminate\Database\Eloquent\Relations\HasMany;
use Illuminate\Database\Eloquent\SoftDeletes;

/**
 * @property int $id
 * @property int $user_id
 * @property string $title
 * @property string $content
 * @property string $status
 * @property array|null $metadata
 * @property \Carbon\Carbon $created_at
 * @property \Carbon\Carbon $updated_at
 * @property \Carbon\Carbon|null $deleted_at
 * 
 * @property-read User $user
 * @property-read \Illuminate\Database\Eloquent\Collection|Comment[] $comments
 * @property-read \Illuminate\Database\Eloquent\Collection|Tag[] $tags
 */
class Post extends Model
{
    use SoftDeletes;

    protected $fillable = [
        'user_id',
        'title',
        'content',
        'status',
        'metadata',
    ];

    protected $casts = [
        'metadata' => 'array',
    ];

    public function user(): BelongsTo
    {
        return $this->belongsTo(User::class);
    }

    public function comments(): HasMany
    {
        return $this->hasMany(Comment::class);
    }

    public function tags(): BelongsToMany
    {
        return $this->belongsToMany(Tag::class, 'post_tag');
    }
}

You can also use Elosql programmatically:

use Sepehr_Mohseni\Elosql\Parsers\SchemaParserFactory;
use Sepehr_Mohseni\Elosql\Generators\MigrationGenerator;
use Sepehr_Mohseni\Elosql\Generators\ModelGenerator;

// Get the parser for your database
$parser = app(SchemaParserFactory::class)->make('mysql');

// Parse all tables
$tables = $parser->getTables();

// Or parse specific tables
$tables = $parser->getTables([
    'include' => ['users', 'posts'],
    'exclude' => ['migrations'],
]);

// Generate migrations
$migrationGenerator = app(MigrationGenerator::class);
$files = $migrationGenerator->generateAll($tables, 'mysql', database_path('migrations'));

// Generate models
$modelGenerator = app(ModelGenerator::class);
foreach ($tables as $table) {
    $content = $modelGenerator->generate($table, 'mysql', $tables);
    // Write to file or process as needed
}

Elosql handles foreign key dependencies intelligently:

  1. Dependency Resolution – Tables are ordered based on their foreign key dependencies using topological sorting
  2. Separate FK Migrations – Foreign keys are generated in separate migration files that run after all tables are created
  3. Circular Dependencies – Detected and reported with suggestions for resolution

This ensures migrations can be run without foreign key constraint violations.

Supported column types:

MySQL/MariaDB:
  • Integers: tinyint, smallint, mediumint, int, bigint
  • Floating point: float, double, decimal
  • Strings: char, varchar, text, mediumtext, longtext
  • Binary: binary, varbinary, blob
  • Date/Time: date, datetime, timestamp, time, year
  • Special: json, enum, set, boolean
  • Spatial: point, linestring, polygon, geometry

PostgreSQL:
  • All standard types plus: uuid, jsonb, inet, macaddr, cidr
  • Array types
  • Range types

SQLite:
  • integer, real, text, blob, numeric

SQL Server:
  • All standard types plus: uniqueidentifier, nvarchar, ntext

Run the test suite:

Run with coverage:

Run static analysis:

Fix code style:

Contributions are welcome! Please see CONTRIBUTING.md for details.

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

If you discover any security-related issues, please email isepehrmohseni@gmail.com instead of using the issue tracker.

The MIT License (MIT). Please see License File for more information.

Laravel News Links

Flying Through a Computer Chip

https://theawesomer.com/photos/2026/01/flying_through_a_computer_chip_t.jpg

Flying Through a Computer Chip

Epic Spaceman takes us on a journey through a smartphone’s main processing unit by enlarging a computer chip to the size of Manhattan and flying through it with his digital avatar. It’s mind-blowing when you realize just how much computing power and engineering complexity fits inside a chip the size of a fingernail. For more, check out his collab with MKBHD.

The Awesomer