KeyPort Versa58 Swiss Army Upgrade System

https://theawesomer.com/photos/2026/01/keyport_versa58_swiss_army_accessories_t.jpg

KeyPort’s latest creation is a modular upgrade system for standard 58mm Swiss Army Knives. At the heart of the Versa58 are its magnetic mounting plates, which let you easily snap tools on and off. The first modules include a mini flashlight, a retractable pen, a USB-C flash drive, a pocket clip, and a multi-purpose holder for a toothpick, tweezers, or ferro rod.

The Awesomer

MySQL 8.4 disables AHI – Why and What you need to know

MySQL 8.4 changed the InnoDB adaptive hash index (innodb_adaptive_hash_index) default from ON to OFF, a major shift after years of it being enabled by default. Note that the MySQL adaptive hash index (AHI) feature remains fully available and configurable.

This blog is me going down the rabbit hole so you don’t have to, presenting only what you actually need to know. I’m sure you’re a great MySQLer who knows it all and might want to skip this, but DON’T: take part in the bonus task toward the end.

Note that MariaDB already made this change in 10.5.4 (see MDEV-20487), so MySQL is doing nothing new! But why? Let me start with the “what” first!

What is the Adaptive Hash Index (AHI) in MySQL?

This has been discussed so many times, I’ll keep it short.

We know InnoDB uses B-trees for all indexes. A typical lookup requires traversing 3 – 4 levels: root > internal nodes > leaf page. For millions of rows, this is efficient but not instant.

AHI is an in-memory hash table that sits on top of your B-tree indexes. It monitors access patterns in real-time, and when it detects frequent lookups with the same search keys, it builds hash entries that map those keys directly to buffer pool pages.

So the next time the same search key is hit, instead of a multi-level B-tree traversal you get a single hash lookup in the AHI memory area and a direct jump to the buffer pool page, giving you immediate data access.

FYI, AHI is part of the InnoDB buffer pool.

What is “adaptive” in the “hash index”

InnoDB watches your workload and decides what to cache adaptively, based on access patterns and lookup frequency. You don’t configure which indexes or keys to hash; InnoDB figures it out automatically. High-frequency lookups? AHI builds entries. Access patterns change? AHI rebuilds the hash. It’s a self-tuning optimization that adjusts to your actual runtime behavior and query patterns. That’s the adaptive-ness.

Sounds perfect, right? What’s the problem then?

The Problem(s) with AHI

– Overhead of AHI

AHI is optimal for frequently accessed pages, but what about infrequently accessed ones? The lookup path for such a query is:

– Check AHI
– Check bufferpool
– Read from disk

For infrequent or random access patterns the AHI lookup isn’t useful; it only falls through to the regular B-tree path anyway. You spend memory on the hash entries and burn CPU cycles on searches and comparisons for nothing.

– There is a latch on the AHI door

AHI is a shared data structure. Although it is partitioned (innodb_adaptive_hash_index_parts), each partition is protected by its own latch. As concurrency increases, threads can end up blocking each other on those latches.
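
Want to check whether those latches are actually hot on your server? If Performance Schema wait instrumentation is enabled, a sketch like this can surface AHI latch waits; the exact instrument name varies across MySQL versions, so the LIKE pattern below is a hedged guess rather than gospel:

-- Sketch: AHI (btr_search) latch waits via Performance Schema.
-- Requires wait/synch instrumentation to be enabled; instrument
-- names differ between versions, hence the loose LIKE pattern.
SELECT EVENT_NAME,
       COUNT_STAR,
       ROUND(SUM_TIMER_WAIT/1000000000000, 2) AS wait_sec
FROM performance_schema.events_waits_summary_global_by_event_name
WHERE EVENT_NAME LIKE '%btr_search%'
ORDER BY SUM_TIMER_WAIT DESC;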

– The unpredictability of AHI

This appears to be the main reason for disabling the Adaptive Hash Index in MySQL 8.4. The optimizer needs to predict costs BEFORE the query runs. It has to decide: “Should I use index A or index B?” AHI is built dynamically and depends on how frequently (or infrequently) data is accessed, so the optimizer cannot predict a consistent query path.

The comments in the IndexLookupCost section of cost_model.h explain it better, and I quote:

“With AHI enabled the cost of random lookups does not appear to be predictable using standard explanatory variables such as index height or the logarithm of the number of rows in the index.”

I encourage you to admire the explanation in the comments here: https://dev.mysql.com/doc/dev/mysql-server/latest/cost__model_8h_source.html

Why AHI Is Disabled in MySQL 8.4

I’d word it like this: the default change for the InnoDB Adaptive Hash Index in MySQL 8.4 was driven by two things.
One: the realization that favoring predictability matters more than potential gains in specific scenarios.
Two: end users still have the feature available, and they can enable it if they know (or think) it would help them.

In my production experience, AHI frequently becomes a contention bottleneck under certain workloads, such as write-heavy or highly concurrent ones, or when the active dataset is larger than the buffer pool. Disabling AHI ensures consistent response times and eliminates a common source of performance unpredictability.

That brings us to the next segment: what is it that YOU need to do, and importantly, HOW?

The bottom line: MySQL 8.4 defaults to innodb_adaptive_hash_index=OFF. Before upgrading, verify whether AHI is actually helping your workload or quietly hurting it.
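
Before anything else, check what your server is running with right now; this works on any recent MySQL:

-- Current AHI setting and partition count
SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_hash_index%';
-- innodb_adaptive_hash_index        ON (or OFF)
-- innodb_adaptive_hash_index_parts  8 (the default)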

How to track MySQL AHI usage

Using the MySQL CLI

Run the SHOW ENGINE INNODB STATUS command and look for the section titled “INSERT BUFFER AND ADAPTIVE HASH INDEX”:

SHOW ENGINE INNODB STATUS\G
8582.85 hash searches/s, 8518.85 non-hash searches/s

Here:
– hash searches: lookups served by AHI
– non-hash searches: regular B-tree lookups (after the AHI search fails)

If your hash search rate is significantly higher, AHI is actively helping.
If the numbers for AHI are similar or lower, AHI isn’t providing much benefit.
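
If you prefer counters to parsing status output, the same hash vs. B-tree split is exposed through INNODB_METRICS. Treat this as a sketch: depending on your version these counters may need enabling first via innodb_monitor_enable:

-- AHI hits vs. B-tree fallbacks from INNODB_METRICS.
-- These metrics may be disabled by default; if so, enable with:
--   SET GLOBAL innodb_monitor_enable = 'module_adaptive_hash';
SELECT NAME, COUNT, STATUS
FROM information_schema.INNODB_METRICS
WHERE NAME IN ('adaptive_hash_searches', 'adaptive_hash_searches_btree');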

Is AHI causing contention in MySQL?

In SHOW ENGINE INNODB STATUS look for wait events in SEMAPHORE section:

-Thread X has waited at btr0sea.ic line … seconds the semaphore:
S-lock on RW-latch at … created in file btr0sea.cc line …

If SHOW ENGINE INNODB STATUS shows many threads waiting on RW-latches created in btr0sea.cc, that is a sign of Adaptive Hash Index latch contention, and a signal to consider disabling it.
Refer: https://dev.mysql.com/doc/dev/mysql-server/latest/btr0sea_8cc.html
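
The good news: innodb_adaptive_hash_index is a dynamic variable, so you can experiment without a restart. Turning it off drops the current hash entries; turning it back on lets InnoDB rebuild them over time:

-- Disable AHI at runtime (no restart needed)
SET GLOBAL innodb_adaptive_hash_index = OFF;

-- Re-enable if your measurements say it helps
SET GLOBAL innodb_adaptive_hash_index = ON;

-- MySQL 8.0+: persist the setting across restarts
SET PERSIST innodb_adaptive_hash_index = OFF;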

Monitoring AHI for MySQL

How about watching a chart that shows AHI efficiency? Percona Monitoring and Management (PMM) makes that visualization easy, helping you decide whether AHI is a win for your current workload. Here are a thousand words for you:

Bonus Task

Think you’ve got MySQL AHI figured out? Let’s put that to the test:

  1. Open pmmdemo.percona.com
  2. Go to Dashboards > MySQL > MySQL InnoDB Details
  3. Scroll down to “Innodb Adaptive Hash Index” section
  4. Answer this question in the comments section: Which MySQL instances are better off without AHI?

Conclusion

AHI is a great idea, and it works until it doesn’t. You’ve got to do the homework: track usage, measure impact, then decide. Make sure you’re ready before your upgrade.
If your monitoring shows consistently high hash search rates with minimal contention, you’re in the sweet spot and AHI should remain enabled. If not, innodb_adaptive_hash_index is best left OFF.
I recall a recent song verse that suits MySQL AHI well: “I’m a king but I’m far from a saint”, “It’s a blessing and a curse” (IUKUK)

Have you seen AHI help or hurt in your systems? What’s your plan for MySQL 8.4? I’d love to hear real-world experiences… the database community learns best when we share our war stories.

PS

Open source is beautiful: you can actually read the code (and comments) and understand the “why” behind decisions.

Planet for the MySQL Community

150+ SQL Commands Explained With Examples (2026 Update)

https://codeforgeek.com/wp-content/uploads/2026/01/150-SQL-Commands-Explained.png

In this guide, we explain 150+ SQL commands in simple words, covering everything from basic queries to advanced functions for 2026. We cover almost every SQL command that exists in one single place, so you never have to go search for anything anywhere else. If you master these 150 commands, you will become an SQL […]

Planet MySQL

Introducing MySQL Studio – Reducing the Barriers to Data Innovation

MySQL Studio in Oracle Cloud Infrastructure (OCI) is a unified environment for working with MySQL and HeatWave features through a single, streamlined interface. It brings SQL authoring, AI-assisted chat, and Jupyter-compatible notebooks together with project-based organization to help teams get from database setup to productive analytics faster. The same […]

Planet MySQL

MySQL Performance Tuning: From Slow Queries to Lightning-Fast Database

https://media2.dev.to/dynamic/image/width=1000,height=500,fit=cover,gravity=auto,format=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvd5exh6bx5tq40s8ye01.png

Database performance is often the bottleneck in web applications. This guide covers comprehensive MySQL optimization techniques from query-level improvements to server configuration tuning.



Understanding Query Execution

Before optimizing, understand how MySQL executes queries using EXPLAIN:

EXPLAIN SELECT 
    o.id,
    o.total,
    u.name,
    COUNT(oi.id) as item_count
FROM orders o
JOIN users u ON o.user_id = u.id
JOIN order_items oi ON o.id = oi.order_id
WHERE o.status = 'completed'
    AND o.created_at > '2024-01-01'
GROUP BY o.id
ORDER BY o.created_at DESC
LIMIT 20;

Key EXPLAIN columns to watch: type (aim for ref or better), rows (lower is better), Extra (avoid "Using filesort" and "Using temporary").



EXPLAIN Output Analysis

+----+-------------+-------+--------+---------------+---------+---------+------------------+------+-------------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref              | rows | Extra       |
+----+-------------+-------+--------+---------------+---------+---------+------------------+------+-------------+
|  1 | SIMPLE      | o     | range  | idx_status    | idx_... | 4       | NULL             | 5000 | Using where |
|  1 | SIMPLE      | u     | eq_ref | PRIMARY       | PRIMARY | 4       | mydb.o.user_id   |    1 | NULL        |
|  1 | SIMPLE      | oi    | ref    | idx_order     | idx_... | 4       | mydb.o.id        |    3 | Using index |
+----+-------------+-------+--------+---------------+---------+---------+------------------+------+-------------+
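
The tabular EXPLAIN shows the optimizer's estimates before the query runs. On MySQL 8.0.18+ you can go a step further with EXPLAIN ANALYZE, which actually executes the query and reports measured row counts and timings per plan step; here it is on a trimmed-down version of the query above:

-- MySQL 8.0.18+: executes the query and reports actual
-- row counts and per-step timings alongside the estimates
EXPLAIN ANALYZE
SELECT o.id, o.total, u.name
FROM orders o
JOIN users u ON o.user_id = u.id
WHERE o.status = 'completed'
LIMIT 20;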



Indexing Strategies



Composite Index Design

Design indexes based on query patterns:

-- Query pattern: Filter by status, date range, sort by date
SELECT * FROM orders 
WHERE status = 'pending' 
AND created_at > '2024-01-01'
ORDER BY created_at DESC;

-- Optimal composite index (leftmost prefix rule)
CREATE INDEX idx_orders_status_created 
ON orders(status, created_at);

-- For queries with multiple equality conditions
SELECT * FROM products
WHERE category_id = 5
AND brand_id = 10
AND is_active = 1;

-- Index with most selective column first
CREATE INDEX idx_products_brand_cat_active
ON products(brand_id, category_id, is_active);



Covering Indexes

Avoid table lookups with covering indexes:

-- Query only needs specific columns
SELECT id, name, price FROM products
WHERE category_id = 5
ORDER BY price;

-- Covering index includes all needed columns
CREATE INDEX idx_products_covering
ON products(category_id, price, id, name);

-- MySQL can satisfy query entirely from index
-- EXPLAIN shows "Using index" in Extra column



Index for JOIN Operations

-- Ensure foreign keys are indexed
CREATE INDEX idx_orders_user_id ON orders(user_id);
CREATE INDEX idx_order_items_order_id ON order_items(order_id);
CREATE INDEX idx_order_items_product_id ON order_items(product_id);

-- For complex joins, index the join columns
SELECT p.name, SUM(oi.quantity) as total_sold
FROM products p
JOIN order_items oi ON p.id = oi.product_id
JOIN orders o ON oi.order_id = o.id
WHERE o.created_at > '2024-01-01'
GROUP BY p.id
ORDER BY total_sold DESC;

-- Indexes needed:
-- orders(created_at) - for WHERE filter
-- order_items(order_id) - for JOIN
-- order_items(product_id) - for JOIN

Don’t over-index! Each index slows down INSERT/UPDATE operations. Monitor unused indexes with sys.schema_unused_indexes.



Query Optimization Techniques



Avoiding Full Table Scans

-- Bad: Function on indexed column prevents index use
SELECT * FROM users WHERE YEAR(created_at) = 2024;

-- Good: Range query uses index
SELECT * FROM users 
WHERE created_at >= '2024-01-01' 
AND created_at < '2025-01-01';

-- Bad: Leading wildcard prevents index use
SELECT * FROM products WHERE name LIKE '%phone%';

-- Good: Trailing wildcard can use index
SELECT * FROM products WHERE name LIKE 'phone%';

-- For full-text search, use FULLTEXT index
ALTER TABLE products ADD FULLTEXT INDEX ft_name (name);
SELECT * FROM products WHERE MATCH(name) AGAINST('phone');



Optimizing Subqueries

-- Bad: Correlated subquery runs for each row
SELECT * FROM products p
WHERE price > (
    SELECT AVG(price) FROM products 
    WHERE category_id = p.category_id
);

-- Good: JOIN with derived table
SELECT p.* FROM products p
JOIN (
    SELECT category_id, AVG(price) as avg_price
    FROM products
    GROUP BY category_id
) cat_avg ON p.category_id = cat_avg.category_id
WHERE p.price > cat_avg.avg_price;

-- Even better: Window function (MySQL 8.0+)
SELECT * FROM (
    SELECT *, AVG(price) OVER (PARTITION BY category_id) as avg_price
    FROM products
) t WHERE price > avg_price;



Pagination Optimization

-- Bad: OFFSET scans and discards rows
SELECT * FROM products ORDER BY id LIMIT 10 OFFSET 100000;

-- Good: Keyset pagination (cursor-based)
SELECT * FROM products 
WHERE id > 100000  -- Last seen ID
ORDER BY id 
LIMIT 10;

-- For complex sorting, use deferred join
SELECT p.* FROM products p
JOIN (
    SELECT id FROM products
    ORDER BY created_at DESC, id DESC
    LIMIT 10 OFFSET 100000
) t ON p.id = t.id;



Server Configuration Tuning



InnoDB Buffer Pool

# my.cnf - For dedicated database server with 32GB RAM

[mysqld]
# Buffer pool should be 70-80% of available RAM
innodb_buffer_pool_size = 24G
innodb_buffer_pool_instances = 24

# Log file size affects recovery time vs write performance
innodb_log_file_size = 2G
innodb_log_buffer_size = 64M

# Flush settings (1 = safest, 2 = faster)
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT

# Thread concurrency
innodb_thread_concurrency = 0
innodb_read_io_threads = 8
innodb_write_io_threads = 8



Connection and Memory Settings

[mysqld]
# Connection handling
max_connections = 500
thread_cache_size = 100

# Memory per connection
sort_buffer_size = 4M
join_buffer_size = 4M
read_buffer_size = 2M
read_rnd_buffer_size = 8M

# Temporary tables
tmp_table_size = 256M
max_heap_table_size = 256M

# Table cache
table_open_cache = 4000
table_definition_cache = 2000



Monitoring and Profiling



Slow Query Log

[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
log_queries_not_using_indexes = 1
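
All four of these are also dynamic variables, so you can switch the slow query log on for a running server without a restart; for example:

-- Enable the slow query log at runtime (no restart needed)
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- applies to new connections
SET GLOBAL log_queries_not_using_indexes = 'ON';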



Performance Schema Queries

-- Find top 10 slowest queries
SELECT 
    DIGEST_TEXT,
    COUNT_STAR as exec_count,
    ROUND(SUM_TIMER_WAIT/1000000000000, 2) as total_time_sec,
    ROUND(AVG_TIMER_WAIT/1000000000, 2) as avg_time_ms,
    SUM_ROWS_EXAMINED,
    SUM_ROWS_SENT
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;

-- Find tables with most I/O
SELECT 
    object_schema,
    object_name,
    count_read,
    count_write,
    ROUND(sum_timer_read/1000000000000, 2) as read_time_sec,
    ROUND(sum_timer_write/1000000000000, 2) as write_time_sec
FROM performance_schema.table_io_waits_summary_by_table
ORDER BY sum_timer_wait DESC
LIMIT 10;

-- Find unused indexes
SELECT * FROM sys.schema_unused_indexes;

-- Find redundant indexes
SELECT * FROM sys.schema_redundant_indexes;



Real-time Monitoring

-- Current running queries
SELECT 
    id,
    user,
    host,
    db,
    command,
    time,
    state,
    LEFT(info, 100) as query
FROM information_schema.processlist
WHERE command != 'Sleep'
ORDER BY time DESC;

-- InnoDB status
SHOW ENGINE INNODB STATUS\G

-- Buffer pool hit ratio (should be > 99%)
SELECT 
    (1 - (
        (SELECT variable_value FROM performance_schema.global_status WHERE variable_name = 'Innodb_buffer_pool_reads') /
        (SELECT variable_value FROM performance_schema.global_status WHERE variable_name = 'Innodb_buffer_pool_read_requests')
    )) * 100 as buffer_pool_hit_ratio;



Partitioning for Large Tables

-- Range partitioning by date
CREATE TABLE orders (
    id BIGINT AUTO_INCREMENT,
    user_id INT NOT NULL,
    total DECIMAL(10,2),
    status VARCHAR(20),
    created_at DATETIME NOT NULL,
    PRIMARY KEY (id, created_at),
    INDEX idx_user (user_id, created_at)
) PARTITION BY RANGE (YEAR(created_at)) (
    PARTITION p2022 VALUES LESS THAN (2023),
    PARTITION p2023 VALUES LESS THAN (2024),
    PARTITION p2024 VALUES LESS THAN (2025),
    PARTITION p_future VALUES LESS THAN MAXVALUE
);

-- Queries automatically prune partitions
SELECT * FROM orders 
WHERE created_at >= '2024-01-01' 
AND created_at < '2024-07-01';
-- Only scans p2024 partition



Connection Pooling



Application-Level Pooling

// Node.js with mysql2
const mysql = require('mysql2/promise');

const pool = mysql.createPool({
  host: 'localhost',
  user: 'app_user',
  password: 'password',
  database: 'myapp',
  waitForConnections: true,
  connectionLimit: 20,
  queueLimit: 0,
  enableKeepAlive: true,
  keepAliveInitialDelay: 10000
});

// Use pool for queries
async function getUser(id) {
  const [rows] = await pool.execute(
    'SELECT * FROM users WHERE id = ?',
    [id]
  );
  return rows[0];
}



Conclusion

MySQL performance optimization is an iterative process. Start by identifying slow queries with the slow query log, analyze them with EXPLAIN, add appropriate indexes, and monitor the results. Server configuration should be tuned based on your workload characteristics and available resources.

Key takeaways:

  • Design indexes based on actual query patterns
  • Use EXPLAIN to understand query execution
  • Avoid functions on indexed columns in WHERE clauses
  • Configure InnoDB buffer pool appropriately
  • Monitor continuously with Performance Schema

Laravel News Links

EloSQL – Automatically Generate Migrations and Eloquent Models based on your SQL Database Schema

https://opengraph.githubassets.com/744ff4f3b9a5010fa8a9d56714cd15bf3cacdd32eaedb67499cb7811b54762c8/sepehr-mohseni/elosql

Elosql is a production-grade Laravel package that intelligently analyzes existing database schemas and generates precise migrations and Eloquent models. It supports MySQL, PostgreSQL, SQLite, and SQL Server, making it perfect for legacy database integration, reverse engineering, and rapid application scaffolding.

  • 🔍 Smart Schema Analysis – Automatically detects columns, indexes, foreign keys, and table relationships
  • 🚀 Multi-Database Support – Works with MySQL/MariaDB, PostgreSQL, SQLite, and SQL Server
  • 📁 Migration Generation – Creates Laravel migrations with proper dependency ordering
  • 🏗️ Model Scaffolding – Generates Eloquent models with relationships, casts, and fillable attributes
  • 🔗 Relationship Detection – Automatically detects belongsTo, hasMany, hasOne, belongsToMany, and polymorphic relationships
  • 📊 Schema Diff – Compare database schema with existing migrations
  • ⚙️ Highly Configurable – Customize every aspect of generation through config or command options
  • Production Ready – Comprehensive test suite with 90%+ coverage

Requirements:

  • PHP 8.1 or higher
  • Laravel 10.0 or 11.0

Install via Composer:

composer require sepehr-mohseni/elosql

The package will auto-register its service provider. Optionally, publish the configuration file:

php artisan vendor:publish --tag=elosql-config

Generate migrations and models for your entire database:

php artisan elosql:schema

See what will be generated without creating any files:

php artisan elosql:preview

Or run the two generators individually:

php artisan elosql:migrations
php artisan elosql:models

The main command that generates both migrations and models.

php artisan elosql:schema [options]

Options:
  --connection=       Database connection to use (default: default connection)
  --table=            Generate for specific table(s), comma-separated
  --exclude=          Exclude specific table(s), comma-separated
  --migrations-path=  Custom path for migrations (default: database/migrations)
  --models-path=      Custom path for models (default: app/Models)
  --models-namespace= Custom namespace for models (default: App\Models)
  --no-migrations     Skip migration generation
  --no-models         Skip model generation
  --force             Overwrite existing files

Examples:

# Generate for specific tables
php artisan elosql:schema --table=users,posts,comments

# Exclude certain tables
php artisan elosql:schema --exclude=migrations,cache,sessions

# Custom output paths
php artisan elosql:schema --migrations-path=database/generated --models-path=app/Domain/Models

# Use a different database connection
php artisan elosql:schema --connection=legacy_db

Generate migration files from database schema.

php artisan elosql:migrations [options]

Options:
  --connection=   Database connection to use
  --table=        Generate for specific table(s)
  --exclude=      Exclude specific table(s)
  --path=         Custom output path
  --fresh         Generate fresh migrations (ignore existing)
  --diff          Only generate migrations for schema differences
  --force         Overwrite existing files

Examples:

# Generate migrations for a legacy database
php artisan elosql:migrations --connection=legacy --path=database/legacy-migrations

# Generate only new/changed tables
php artisan elosql:migrations --diff

Generate Eloquent model files.

php artisan elosql:models [options]

Options:
  --connection=   Database connection to use
  --table=        Generate for specific table(s)
  --exclude=      Exclude specific table(s)
  --path=         Custom output path
  --namespace=    Custom namespace
  --preview       Preview generated code without writing files
  --force         Overwrite existing files

Examples:

# Preview model generation
php artisan elosql:models --preview --table=users

# Generate with custom namespace
php artisan elosql:models --namespace="Domain\\User\\Models"

Preview the schema analysis without generating any files.

php artisan elosql:preview [options]

Options:
  --connection=   Database connection to use
  --table=        Preview specific table(s)
  --format=       Output format: table, json, yaml (default: table)

Examples:

# JSON output for processing
php artisan elosql:preview --format=json > schema.json

# View specific table structure
php artisan elosql:preview --table=users

Show differences between database schema and existing migrations.

php artisan elosql:diff [options]

Options:
  --connection=   Database connection to use
  --format=       Output format: table, json (default: table)

After publishing the config file (config/elosql.php), you can customize:

'connection' => env('ELOSQL_CONNECTION', null), // null = default connection
'exclude_tables' => [
    'migrations',
    'failed_jobs',
    'password_resets',
    'personal_access_tokens',
    'cache',
    'sessions',
],
'migrations' => [
    'path' => database_path('migrations'),
    'separate_foreign_keys' => true, // Generate FK migrations separately
    'include_drop_tables' => true,   // Include down() method
],
'models' => [
    'path' => app_path('Models'),
    'namespace' => 'App\\Models',
    'base_class' => \Illuminate\Database\Eloquent\Model::class,
    'use_guarded' => false,           // Use $guarded instead of $fillable
    'generate_phpdoc' => true,        // Generate PHPDoc blocks
    'detect_soft_deletes' => true,    // Auto-detect SoftDeletes trait
    'detect_timestamps' => true,      // Auto-detect timestamp columns
],

Customize how database types map to Laravel migration methods:

'type_mappings' => [
    'mysql' => [
        'tinyint(1)' => 'boolean',
        'json' => 'json',
        // Add custom mappings
    ],
    'pgsql' => [
        'jsonb' => 'jsonb',
        'uuid' => 'uuid',
    ],
],
'relationships' => [
    'detect_belongs_to' => true,
    'detect_has_many' => true,
    'detect_has_one' => true,
    'detect_belongs_to_many' => true,
    'detect_morph' => true,
    'pivot_table_patterns' => [
        // Regex patterns for detecting pivot tables
        '/^([a-z]+)_([a-z]+)$/',
    ],
],
An example of a migration Elosql generates:

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::create('posts', function (Blueprint $table) {
            $table->id();
            $table->foreignId('user_id')->constrained()->onDelete('cascade');
            $table->string('title', 255);
            $table->text('content');
            $table->enum('status', ['draft', 'published', 'archived'])->default('draft');
            $table->json('metadata')->nullable();
            $table->timestamps();
            $table->softDeletes();
            
            $table->index('status');
            $table->fullText('content');
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('posts');
    }
};
And an example of a generated model:

<?php

declare(strict_types=1);

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\BelongsTo;
use Illuminate\Database\Eloquent\Relations\BelongsToMany;
use Illuminate\Database\Eloquent\Relations\HasMany;
use Illuminate\Database\Eloquent\SoftDeletes;

/**
 * @property int $id
 * @property int $user_id
 * @property string $title
 * @property string $content
 * @property string $status
 * @property array|null $metadata
 * @property \Carbon\Carbon $created_at
 * @property \Carbon\Carbon $updated_at
 * @property \Carbon\Carbon|null $deleted_at
 * 
 * @property-read User $user
 * @property-read \Illuminate\Database\Eloquent\Collection|Comment[] $comments
 * @property-read \Illuminate\Database\Eloquent\Collection|Tag[] $tags
 */
class Post extends Model
{
    use SoftDeletes;

    protected $fillable = [
        'user_id',
        'title',
        'content',
        'status',
        'metadata',
    ];

    protected $casts = [
        'metadata' => 'array',
    ];

    public function user(): BelongsTo
    {
        return $this->belongsTo(User::class);
    }

    public function comments(): HasMany
    {
        return $this->hasMany(Comment::class);
    }

    public function tags(): BelongsToMany
    {
        return $this->belongsToMany(Tag::class, 'post_tag');
    }
}

You can also use Elosql programmatically:

use Sepehr_Mohseni\Elosql\Parsers\SchemaParserFactory;
use Sepehr_Mohseni\Elosql\Generators\MigrationGenerator;
use Sepehr_Mohseni\Elosql\Generators\ModelGenerator;

// Get the parser for your database
$parser = app(SchemaParserFactory::class)->make('mysql');

// Parse all tables
$tables = $parser->getTables();

// Or parse specific tables
$tables = $parser->getTables([
    'include' => ['users', 'posts'],
    'exclude' => ['migrations'],
]);

// Generate migrations
$migrationGenerator = app(MigrationGenerator::class);
$files = $migrationGenerator->generateAll($tables, 'mysql', database_path('migrations'));

// Generate models
$modelGenerator = app(ModelGenerator::class);
foreach ($tables as $table) {
    $content = $modelGenerator->generate($table, 'mysql', $tables);
    // Write to file or process as needed
}

Elosql handles foreign key dependencies intelligently:

  1. Dependency Resolution – Tables are ordered based on their foreign key dependencies using topological sorting
  2. Separate FK Migrations – Foreign keys are generated in separate migration files that run after all tables are created
  3. Circular Dependencies – Detected and reported with suggestions for resolution

This ensures migrations can be run without foreign key constraint violations.

Supported column types by database:

MySQL/MariaDB:

  • Integers: tinyint, smallint, mediumint, int, bigint
  • Floating point: float, double, decimal
  • Strings: char, varchar, text, mediumtext, longtext
  • Binary: binary, varbinary, blob
  • Date/Time: date, datetime, timestamp, time, year
  • Special: json, enum, set, boolean
  • Spatial: point, linestring, polygon, geometry

PostgreSQL:

  • All standard types plus: uuid, jsonb, inet, macaddr, cidr
  • Array types
  • Range types

SQLite:

  • integer, real, text, blob, numeric

SQL Server:

  • All standard types plus: uniqueidentifier, nvarchar, ntext

Run the test suite:

Run with coverage:

Run static analysis:

Fix code style:

Contributions are welcome! Please see CONTRIBUTING.md for details.

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

If you discover any security-related issues, please email isepehrmohseni@gmail.com instead of using the issue tracker.

The MIT License (MIT). Please see License File for more information.

Laravel News Links

Flying Through a Computer Chip

https://theawesomer.com/photos/2026/01/flying_through_a_computer_chip_t.jpg

Epic Spaceman takes us on a journey through a smartphone’s main processing unit by enlarging a computer chip to the size of Manhattan and flying through it with his digital avatar. It’s mind-blowing when you realize just how much computing power and engineering complexity fits inside a chip the size of a fingernail. For more, check out his collab with MKBHD.

The Awesomer

The Latest ‘Avengers: Doomsday’ Trailer Gives Us an Unlikely Team-Up

https://gizmodo.com/app/uploads/2026/01/avengers-doomsday-thing-fantastic-four-1280×853.jpg

It’s a Tuesday morning, and Avatar: Fire and Ash is still playing in theaters, so you know what that means: it’s time to sit down and watch a much nicer quality version of an Avengers: Doomsday trailer you already saw a camrip of on social media a week ago.

At least this time, whether or not you saw the leaks that have heralded every delayed online release rolling out Marvel’s Doomsday marketing plan, there is a surprisingly novel element to this latest one: it shows a new team-up for the latest entry in a superhero team-up movie. Imagine that!

The fourth Doomsday teaser jukes perhaps where many would’ve expected it to jive. After last week’s action-packed X-Men tease, things are back to a bit more of a calm, yet dire portent as we catch up with the worlds of Wakanda and Talokan after the events of Wakanda Forever. With Shuri mourning the loss of much of her family and Namor ever-vigilant for things that go bump in the ocean, it’s an intriguing figure we see when Shuri and M’Baku meet to disrupt that contemplation: none other than Ben Grimm of the Fantastic Four.

After their brief appearance arriving into the primary MCU timeline during the Thunderbolts post-credits scene, this is our first proper glimpse of the Fantastic Four joining up with the rest of the MCU, which is a fun little treat. It’s especially nice considering that Namor himself has a long history in the comics with the group, so even if he’s not present for this welcoming, it’s nice to at least put these two corners of the Marvel universe in each other’s paths like this.

But it’s also an intriguing choice for one of these teasers. The past three have focused on some big familiar heavy hitters making their comebacks for Doomsday: the “surprise” return of Chris Evans as Steve Rogers, another original Avenger (and Hollywood Chris) in Thor, and then of course the invocation of the Fox X-Men films. In contrast, this feels a bit more interestingly muted… and, of course, it continues to kick the can of whether or not one of these teasers will give us a proper look at Robert Downey Jr.’s Doctor Doom. At least we’re inching ever closer to that possibility with the arrival of a Fantastic Four member on the scene.

Time will tell if this is all we’ll be getting from Doomsday for now—early rumors about the campaign did say there would be just four teasers—or if we’ll be getting more as long as the box office is in a state of Pandora-induced mania. Either way, you’ll probably learn about it on social media first well before Marvel deigns to officially fill us all in.

Avengers: Doomsday hits theaters on December 18.

Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

Gizmodo