MySQL Performance Tuning: From Slow Queries to a Lightning-Fast Database

https://media2.dev.to/dynamic/image/width=1000,height=500,fit=cover,gravity=auto,format=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvd5exh6bx5tq40s8ye01.png

Database performance is often the bottleneck in web applications. This guide covers comprehensive MySQL optimization techniques from query-level improvements to server configuration tuning.



Understanding Query Execution

Before optimizing, understand how MySQL executes queries using EXPLAIN:

EXPLAIN SELECT 
    o.id,
    o.total,
    u.name,
    COUNT(oi.id) as item_count
FROM orders o
JOIN users u ON o.user_id = u.id
JOIN order_items oi ON o.id = oi.order_id
WHERE o.status = 'completed'
    AND o.created_at > '2024-01-01'
GROUP BY o.id
ORDER BY o.created_at DESC
LIMIT 20;

Key EXPLAIN columns to watch: type (aim for ref or better), rows (lower is better), Extra (avoid "Using filesort" and "Using temporary").



EXPLAIN Output Analysis

+----+-------------+-------+--------+---------------+---------+---------+------------------+------+-------------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref              | rows | Extra       |
+----+-------------+-------+--------+---------------+---------+---------+------------------+------+-------------+
|  1 | SIMPLE      | o     | range  | idx_status    | idx_... | 4       | NULL             | 5000 | Using where |
|  1 | SIMPLE      | u     | eq_ref | PRIMARY       | PRIMARY | 4       | mydb.o.user_id   |    1 | NULL        |
|  1 | SIMPLE      | oi    | ref    | idx_order     | idx_... | 4       | mydb.o.id        |    3 | Using index |
+----+-------------+-------+--------+---------------+---------+---------+------------------+------+-------------+
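A quick way to reason about a plan like this: the `rows` estimates of joined tables multiply, because each row from one table drives lookups in the next. The following is an illustrative helper (not a MySQL tool) that turns an EXPLAIN result into a rough work estimate:

```javascript
// Rough work estimate from an EXPLAIN result: the per-table `rows`
// estimates multiply across the join order. Hypothetical helper for
// back-of-the-envelope analysis only.
function estimateRowsExamined(explainRows) {
  return explainRows.reduce((product, r) => product * r.rows, 1);
}

// The plan above: orders range scan (5000) x users eq_ref (1) x order_items ref (3)
const plan = [{ rows: 5000 }, { rows: 1 }, { rows: 3 }];
console.log(estimateRowsExamined(plan)); // 15000
```

If that product is orders of magnitude larger than the rows the query actually returns, the plan is doing wasted work and usually needs a better index.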



Indexing Strategies



Composite Index Design

Design indexes based on query patterns:

-- Query pattern: Filter by status, date range, sort by date
SELECT * FROM orders 
WHERE status = 'pending' 
AND created_at > '2024-01-01'
ORDER BY created_at DESC;

-- Optimal composite index (leftmost prefix rule)
CREATE INDEX idx_orders_status_created 
ON orders(status, created_at);

-- For queries with multiple equality conditions
SELECT * FROM products
WHERE category_id = 5
AND brand_id = 10
AND is_active = 1;

-- With all-equality predicates any column order satisfies this query;
-- lead with the most selective (or most commonly reused) column
CREATE INDEX idx_products_brand_cat_active
ON products(brand_id, category_id, is_active);



Covering Indexes

Avoid table lookups with covering indexes:

-- Query only needs specific columns
SELECT id, name, price FROM products
WHERE category_id = 5
ORDER BY price;

-- Covering index includes all needed columns
-- (on InnoDB, secondary indexes implicitly append the primary key,
-- so listing id explicitly is optional)
CREATE INDEX idx_products_covering
ON products(category_id, price, id, name);

-- MySQL can satisfy query entirely from index
-- EXPLAIN shows "Using index" in Extra column



Index for JOIN Operations

-- Ensure foreign keys are indexed
CREATE INDEX idx_orders_user_id ON orders(user_id);
CREATE INDEX idx_order_items_order_id ON order_items(order_id);
CREATE INDEX idx_order_items_product_id ON order_items(product_id);

-- For complex joins, index the join columns
SELECT p.name, SUM(oi.quantity) as total_sold
FROM products p
JOIN order_items oi ON p.id = oi.product_id
JOIN orders o ON oi.order_id = o.id
WHERE o.created_at > '2024-01-01'
GROUP BY p.id
ORDER BY total_sold DESC;

-- Indexes needed:
-- orders(created_at) - for WHERE filter
-- order_items(order_id) - for JOIN
-- order_items(product_id) - for JOIN

Don’t over-index! Each index slows down INSERT/UPDATE operations. Monitor unused indexes with sys.schema_unused_indexes.



Query Optimization Techniques



Avoiding Full Table Scans

-- Bad: Function on indexed column prevents index use
SELECT * FROM users WHERE YEAR(created_at) = 2024;

-- Good: Range query uses index
SELECT * FROM users 
WHERE created_at >= '2024-01-01' 
AND created_at < '2025-01-01';

-- Bad: Leading wildcard prevents index use
SELECT * FROM products WHERE name LIKE '%phone%';

-- Good: Trailing wildcard can use index
SELECT * FROM products WHERE name LIKE 'phone%';

-- For full-text search, use FULLTEXT index
ALTER TABLE products ADD FULLTEXT INDEX ft_name (name);
SELECT * FROM products WHERE MATCH(name) AGAINST('phone');



Optimizing Subqueries

-- Bad: Correlated subquery runs for each row
SELECT * FROM products p
WHERE price > (
    SELECT AVG(price) FROM products 
    WHERE category_id = p.category_id
);

-- Good: JOIN with derived table
SELECT p.* FROM products p
JOIN (
    SELECT category_id, AVG(price) as avg_price
    FROM products
    GROUP BY category_id
) cat_avg ON p.category_id = cat_avg.category_id
WHERE p.price > cat_avg.avg_price;

-- Even better: Window function (MySQL 8.0+)
SELECT * FROM (
    SELECT *, AVG(price) OVER (PARTITION BY category_id) as avg_price
    FROM products
) t WHERE price > avg_price;



Pagination Optimization

-- Bad: OFFSET scans and discards rows
SELECT * FROM products ORDER BY id LIMIT 10 OFFSET 100000;

-- Good: Keyset pagination (cursor-based)
SELECT * FROM products 
WHERE id > 100000  -- Last seen ID
ORDER BY id 
LIMIT 10;

-- For complex sorting, use deferred join
SELECT p.* FROM products p
JOIN (
    SELECT id FROM products
    ORDER BY created_at DESC, id DESC
    LIMIT 10 OFFSET 100000
) t ON p.id = t.id;
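For keyset pagination over a composite sort like (created_at DESC, id DESC), the cursor needs both columns. A hypothetical helper (names and shape are assumptions, not a standard API) that builds the parameterized fragment using a row-value comparison, which recent MySQL versions can resolve against the composite index:

```javascript
// Hypothetical keyset-pagination helper for ORDER BY created_at DESC, id DESC.
// The cursor is the last row of the previous page; null means first page.
function keysetClause(cursor) {
  if (!cursor) return { where: '', params: [] };
  return {
    // Row-value comparison: strictly "before" the cursor in descending order
    where: 'WHERE (created_at, id) < (?, ?)',
    params: [cursor.createdAt, cursor.id],
  };
}

const { where, params } = keysetClause({ createdAt: '2024-06-01 00:00:00', id: 42 });
const sql = `SELECT * FROM products ${where} ORDER BY created_at DESC, id DESC LIMIT 10`;
console.log(sql, params);
```

Unlike OFFSET, the cost of fetching page N stays constant, because the server seeks directly to the cursor position in the index.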



Server Configuration Tuning



InnoDB Buffer Pool

# my.cnf - For dedicated database server with 32GB RAM

[mysqld]
# Buffer pool should be 70-80% of available RAM
innodb_buffer_pool_size = 24G
innodb_buffer_pool_instances = 24

# Log file size affects recovery time vs write performance
innodb_log_file_size = 2G
innodb_log_buffer_size = 64M

# Flush settings (1 = safest, 2 = faster)
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT

# Thread concurrency
innodb_thread_concurrency = 0
innodb_read_io_threads = 8
innodb_write_io_threads = 8
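The sizing above follows a simple rule of thumb. As a sketch (an illustrative estimate, not an official formula): take roughly 75% of RAM for the pool, and split it into instances of at least 1 GB each; MySQL ignores innodb_buffer_pool_instances when the pool is under 1 GB and caps instances at 64.

```javascript
// Back-of-the-envelope buffer pool sizing, in GB.
// ratio: fraction of RAM given to the pool (70-80% on a dedicated server).
function bufferPoolConfig(ramGb, ratio = 0.75) {
  const poolGb = Math.floor(ramGb * ratio);
  // At least 1 GB per instance; MySQL allows at most 64 instances.
  const instances = Math.max(1, Math.min(poolGb, 64));
  return { poolGb, instances };
}

console.log(bufferPoolConfig(32)); // { poolGb: 24, instances: 24 } -- matches the config above
```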



Connection and Memory Settings

[mysqld]
# Connection handling
max_connections = 500
thread_cache_size = 100

# Memory per connection
sort_buffer_size = 4M
join_buffer_size = 4M
read_buffer_size = 2M
read_rnd_buffer_size = 8M

# Temporary tables
tmp_table_size = 256M
max_heap_table_size = 256M

# Table cache
table_open_cache = 4000
table_definition_cache = 2000
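Per-connection buffers add up: each connection can allocate its own sort, join, and read buffers, so their sum multiplied by max_connections bounds the extra memory on top of the buffer pool. A sketch of that worst-case arithmetic (illustrative only; real allocation is lazy and per-operation):

```javascript
// Worst-case per-connection buffer memory, in MB.
function maxPerConnectionMemoryMb(cfg) {
  const perConn = cfg.sortBufferMb + cfg.joinBufferMb + cfg.readBufferMb + cfg.readRndBufferMb;
  return cfg.maxConnections * perConn;
}

// The settings above: 500 connections x (4 + 4 + 2 + 8) MB
const cfg = { maxConnections: 500, sortBufferMb: 4, joinBufferMb: 4, readBufferMb: 2, readRndBufferMb: 8 };
console.log(maxPerConnectionMemoryMb(cfg)); // 9000 MB (~8.8 GB) if every connection maxed out
```

In practice most connections use far less, but this ceiling is worth checking before raising max_connections or the buffer sizes.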



Monitoring and Profiling



Slow Query Log

[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
log_queries_not_using_indexes = 1



Performance Schema Queries

-- Find top 10 slowest queries
SELECT 
    DIGEST_TEXT,
    COUNT_STAR as exec_count,
    ROUND(SUM_TIMER_WAIT/1000000000000, 2) as total_time_sec,
    ROUND(AVG_TIMER_WAIT/1000000000, 2) as avg_time_ms,
    SUM_ROWS_EXAMINED,
    SUM_ROWS_SENT
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;

-- Find tables with most I/O
SELECT 
    object_schema,
    object_name,
    count_read,
    count_write,
    ROUND(sum_timer_read/1000000000000, 2) as read_time_sec,
    ROUND(sum_timer_write/1000000000000, 2) as write_time_sec
FROM performance_schema.table_io_waits_summary_by_table
ORDER BY sum_timer_wait DESC
LIMIT 10;

-- Find unused indexes
SELECT * FROM sys.schema_unused_indexes;

-- Find redundant indexes
SELECT * FROM sys.schema_redundant_indexes;
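The divisors in the digest query above are picosecond conversions: Performance Schema timers count picoseconds, so dividing by 10^12 yields seconds and dividing by 10^9 yields milliseconds. The same conversions as plain functions, for illustration:

```javascript
// Performance Schema timer conversions (timers are in picoseconds).
const psToSeconds = (ps) => ps / 1e12;
const psToMillis = (ps) => ps / 1e9;

console.log(psToSeconds(2.5e12)); // 2.5  (seconds)
console.log(psToMillis(5e9));     // 5    (milliseconds)
```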



Real-time Monitoring

-- Current running queries
SELECT 
    id,
    user,
    host,
    db,
    command,
    time,
    state,
    LEFT(info, 100) as query
FROM information_schema.processlist
WHERE command != 'Sleep'
ORDER BY time DESC;

-- InnoDB status
SHOW ENGINE INNODB STATUS\G

-- Buffer pool hit ratio (should be > 99%)
SELECT 
    (1 - (
        (SELECT variable_value FROM performance_schema.global_status WHERE variable_name = 'Innodb_buffer_pool_reads') /
        (SELECT variable_value FROM performance_schema.global_status WHERE variable_name = 'Innodb_buffer_pool_read_requests')
    )) * 100 as buffer_pool_hit_ratio;
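The same hit-ratio arithmetic as the SQL above, written as a plain function: reads that had to go to disk (Innodb_buffer_pool_reads) divided by total read requests, inverted into a percentage.

```javascript
// Buffer pool hit ratio from the two status counters used above.
function bufferPoolHitRatio(poolReads, readRequests) {
  return (1 - poolReads / readRequests) * 100;
}

console.log(bufferPoolHitRatio(1000, 1000000)); // ~99.9 -- healthy; well above the 99% target
```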



Partitioning for Large Tables

-- Range partitioning by date
CREATE TABLE orders (
    id BIGINT AUTO_INCREMENT,
    user_id INT NOT NULL,
    total DECIMAL(10,2),
    status VARCHAR(20),
    created_at DATETIME NOT NULL,
    PRIMARY KEY (id, created_at),
    INDEX idx_user (user_id, created_at)
) PARTITION BY RANGE (YEAR(created_at)) (
    PARTITION p2022 VALUES LESS THAN (2023),
    PARTITION p2023 VALUES LESS THAN (2024),
    PARTITION p2024 VALUES LESS THAN (2025),
    PARTITION p_future VALUES LESS THAN MAXVALUE
);

-- Queries automatically prune partitions
SELECT * FROM orders 
WHERE created_at >= '2024-01-01' 
AND created_at < '2024-07-01';
-- Only scans p2024 partition
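Pruning is just this mapping, done by the server from the partition definitions: each date falls into exactly one partition's range, so only that partition is scanned. A sketch of the lookup for the RANGE-by-YEAR scheme above (partition names mirror the CREATE TABLE; the function itself is illustrative, not MySQL code):

```javascript
// Map a date to the partition whose range covers it, per the scheme above.
function partitionFor(dateStr) {
  const year = new Date(dateStr).getUTCFullYear();
  if (year < 2023) return 'p2022';
  if (year < 2024) return 'p2023';
  if (year < 2025) return 'p2024';
  return 'p_future';
}

console.log(partitionFor('2024-06-15')); // 'p2024'
console.log(partitionFor('2030-01-01')); // 'p_future'
```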



Connection Pooling



Application-Level Pooling

// Node.js with mysql2
const mysql = require('mysql2/promise');

const pool = mysql.createPool({
  host: 'localhost',
  user: 'app_user',
  password: 'password',
  database: 'myapp',
  waitForConnections: true,
  connectionLimit: 20,
  queueLimit: 0,
  enableKeepAlive: true,
  keepAliveInitialDelay: 10000
});

// Use pool for queries
async function getUser(id) {
  const [rows] = await pool.execute(
    'SELECT * FROM users WHERE id = ?',
    [id]
  );
  return rows[0];
}



Conclusion

MySQL performance optimization is an iterative process. Start by identifying slow queries with the slow query log, analyze them with EXPLAIN, add appropriate indexes, and monitor the results. Server configuration should be tuned based on your workload characteristics and available resources.

Key takeaways:

  • Design indexes based on actual query patterns
  • Use EXPLAIN to understand query execution
  • Avoid functions on indexed columns in WHERE clauses
  • Configure InnoDB buffer pool appropriately
  • Monitor continuously with Performance Schema

Laravel News Links

Introducing MySQL Studio – Reducing the Barriers to Data Innovation

MySQL Studio in Oracle Cloud Infrastructure (OCI) is a unified environment for working with MySQL and HeatWave features through a single, streamlined interface. It brings SQL authoring, AI-assisted chat, and Jupyter-compatible notebooks together with project-based organization to help teams get from database setup to productive analytics faster. The same […]

Planet MySQL

The Latest ‘Avengers: Doomsday’ Trailer Gives Us an Unlikely Team-Up

https://gizmodo.com/app/uploads/2026/01/avengers-doomsday-thing-fantastic-four-1280×853.jpg

It’s a Tuesday morning, and Avatar: Fire and Ash is still playing in theaters, so you know what that means: it’s time to sit down and watch a much nicer quality version of an Avengers: Doomsday trailer you already saw a camrip of on social media a week ago.

At least this time, whether or not you saw the leaks that have heralded every delayed online release rolling out Marvel’s Doomsday marketing plan, there is a surprisingly novel element to this latest one: it shows a new team-up for the latest entry in a superhero team-up movie. Imagine that!

The fourth Doomsday teaser jukes perhaps where many would’ve expected it to jive. After last week’s action-packed X-Men tease, things are back to a bit more of a calm, yet dire portent as we catch up with the worlds of Wakanda and Talokan after the events of Wakanda Forever. With Shuri mourning the loss of much of her family and Namor ever-vigilant for things that go bump in the ocean, it’s an intriguing figure we see when Shuri and M’Baku meet to disrupt that contemplation: none other than Ben Grimm of the Fantastic Four.

After their brief appearance arriving into the primary MCU timeline during the Thunderbolts* post-credits scene, this is our first proper glimpse of the Fantastic Four joining up with the rest of the MCU, which is a fun little treat. It’s especially nice considering that Namor himself has a long history in the comics with the group, so even if he’s not present for this welcoming, it’s nice to at least put these two corners of the Marvel universe in each other’s paths like this.

But it’s also an intriguing choice for one of these teasers. The past three have focused on some big familiar heavy hitters making their comebacks for Doomsday: the “surprise” return of Chris Evans as Steve Rogers, another original Avenger (and Hollywood Chris) in Thor, and then of course the invocation of the Fox X-Men films. In contrast, this feels a bit more interestingly muted… and, of course, it continues to kick the can of whether or not one of these teasers will give us a proper look at Robert Downey Jr.’s Doctor Doom. At least we’re inching ever closer to that possibility with the arrival of a Fantastic Four member on the scene.

Time will tell if this is all we’ll be getting from Doomsday for now—early rumors about the campaign did say there would be just four teasers—or if we’ll be getting more as long as the box office is in a state of Pandora-induced mania. Either way, you’ll probably learn about it on social media first well before Marvel deigns to officially fill us all in.

Avengers: Doomsday hits theaters on December 18.

Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

Gizmodo

Flying Through a Computer Chip

https://theawesomer.com/photos/2026/01/flying_through_a_computer_chip_t.jpg


Epic Spaceman takes us on a journey through a smartphone’s main processing unit by enlarging a computer chip to the size of Manhattan and flying through it with his digital avatar. It’s mind-blowing when you realize just how much computing power and engineering complexity fits inside a chip the size of a fingernail. For more, check out his collab with MKBHD.

The Awesomer

EloSQL – Automatically Generate Migrations and Eloquent Models based on your SQL Database Schema

https://opengraph.githubassets.com/744ff4f3b9a5010fa8a9d56714cd15bf3cacdd32eaedb67499cb7811b54762c8/sepehr-mohseni/elosql


Elosql is a production-grade Laravel package that intelligently analyzes existing database schemas and generates precise migrations and Eloquent models. It supports MySQL, PostgreSQL, SQLite, and SQL Server, making it perfect for legacy database integration, reverse engineering, and rapid application scaffolding.

  • 🔍 Smart Schema Analysis – Automatically detects columns, indexes, foreign keys, and table relationships
  • 🚀 Multi-Database Support – Works with MySQL/MariaDB, PostgreSQL, SQLite, and SQL Server
  • 📁 Migration Generation – Creates Laravel migrations with proper dependency ordering
  • 🏗️ Model Scaffolding – Generates Eloquent models with relationships, casts, and fillable attributes
  • 🔗 Relationship Detection – Automatically detects belongsTo, hasMany, hasOne, belongsToMany, and polymorphic relationships
  • 📊 Schema Diff – Compare database schema with existing migrations
  • ⚙️ Highly Configurable – Customize every aspect of generation through config or command options
  • Production Ready – Comprehensive test suite with 90%+ coverage

Requirements:

  • PHP 8.1 or higher
  • Laravel 10.0 or 11.0

Install via Composer:

composer require sepehr-mohseni/elosql

The package will auto-register its service provider. Optionally, publish the configuration file:

php artisan vendor:publish --tag=elosql-config

Generate migrations and models for your entire database:

php artisan elosql:schema

See what will be generated without creating any files, or run the migration and model generators individually:

php artisan elosql:preview
php artisan elosql:migrations
php artisan elosql:models

The main command that generates both migrations and models.

php artisan elosql:schema [options]

Options:
  --connection=       Database connection to use (default: default connection)
  --table=            Generate for specific table(s), comma-separated
  --exclude=          Exclude specific table(s), comma-separated
  --migrations-path=  Custom path for migrations (default: database/migrations)
  --models-path=      Custom path for models (default: app/Models)
  --models-namespace= Custom namespace for models (default: App\Models)
  --no-migrations     Skip migration generation
  --no-models         Skip model generation
  --force             Overwrite existing files

Examples:

# Generate for specific tables
php artisan elosql:schema --table=users,posts,comments

# Exclude certain tables
php artisan elosql:schema --exclude=migrations,cache,sessions

# Custom output paths
php artisan elosql:schema --migrations-path=database/generated --models-path=app/Domain/Models

# Use a different database connection
php artisan elosql:schema --connection=legacy_db

Generate migration files from database schema.

php artisan elosql:migrations [options]

Options:
  --connection=   Database connection to use
  --table=        Generate for specific table(s)
  --exclude=      Exclude specific table(s)
  --path=         Custom output path
  --fresh         Generate fresh migrations (ignore existing)
  --diff          Only generate migrations for schema differences
  --force         Overwrite existing files

Examples:

# Generate migrations for a legacy database
php artisan elosql:migrations --connection=legacy --path=database/legacy-migrations

# Generate only new/changed tables
php artisan elosql:migrations --diff

Generate Eloquent model files.

php artisan elosql:models [options]

Options:
  --connection=   Database connection to use
  --table=        Generate for specific table(s)
  --exclude=      Exclude specific table(s)
  --path=         Custom output path
  --namespace=    Custom namespace
  --preview       Preview generated code without writing files
  --force         Overwrite existing files

Examples:

# Preview model generation
php artisan elosql:models --preview --table=users

# Generate with custom namespace
php artisan elosql:models --namespace="Domain\\User\\Models"

Preview the schema analysis without generating any files.

php artisan elosql:preview [options]

Options:
  --connection=   Database connection to use
  --table=        Preview specific table(s)
  --format=       Output format: table, json, yaml (default: table)

Examples:

# JSON output for processing
php artisan elosql:preview --format=json > schema.json

# View specific table structure
php artisan elosql:preview --table=users

Show differences between database schema and existing migrations.

php artisan elosql:diff [options]

Options:
  --connection=   Database connection to use
  --format=       Output format: table, json (default: table)

After publishing the config file (config/elosql.php), you can customize:

'connection' => env('ELOSQL_CONNECTION', null), // null = default connection
'exclude_tables' => [
    'migrations',
    'failed_jobs',
    'password_resets',
    'personal_access_tokens',
    'cache',
    'sessions',
],
'migrations' => [
    'path' => database_path('migrations'),
    'separate_foreign_keys' => true, // Generate FK migrations separately
    'include_drop_tables' => true,   // Include down() method
],
'models' => [
    'path' => app_path('Models'),
    'namespace' => 'App\\Models',
    'base_class' => \Illuminate\Database\Eloquent\Model::class,
    'use_guarded' => false,           // Use $guarded instead of $fillable
    'generate_phpdoc' => true,        // Generate PHPDoc blocks
    'detect_soft_deletes' => true,    // Auto-detect SoftDeletes trait
    'detect_timestamps' => true,      // Auto-detect timestamp columns
],

Customize how database types map to Laravel migration methods:

'type_mappings' => [
    'mysql' => [
        'tinyint(1)' => 'boolean',
        'json' => 'json',
        // Add custom mappings
    ],
    'pgsql' => [
        'jsonb' => 'jsonb',
        'uuid' => 'uuid',
    ],
],
'relationships' => [
    'detect_belongs_to' => true,
    'detect_has_many' => true,
    'detect_has_one' => true,
    'detect_belongs_to_many' => true,
    'detect_morph' => true,
    'pivot_table_patterns' => [
        // Regex patterns for detecting pivot tables
        '/^([a-z]+)_([a-z]+)$/',
    ],
],
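The pivot_table_patterns entry above is an ordinary regular expression: a table named as two lowercase words joined by a single underscore (e.g. post_tag) is treated as a belongsToMany pivot candidate. Sketched in JavaScript (a hypothetical helper, not the package's code):

```javascript
// The default pivot-table pattern from the config above.
const PIVOT_PATTERN = /^([a-z]+)_([a-z]+)$/;

function looksLikePivot(tableName) {
  return PIVOT_PATTERN.test(tableName);
}

console.log(looksLikePivot('post_tag')); // true
console.log(looksLikePivot('users'));    // false (no underscore-joined pair)
```

Note the pattern is purely name-based, so a two-word table like failed_jobs would also match; the package's column analysis is what confirms a candidate is really a pivot.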
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::create('posts', function (Blueprint $table) {
            $table->id();
            $table->foreignId('user_id')->constrained()->onDelete('cascade');
            $table->string('title', 255);
            $table->text('content');
            $table->enum('status', ['draft', 'published', 'archived'])->default('draft');
            $table->json('metadata')->nullable();
            $table->timestamps();
            $table->softDeletes();
            
            $table->index('status');
            $table->fullText('content');
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('posts');
    }
};
<?php

declare(strict_types=1);

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\BelongsTo;
use Illuminate\Database\Eloquent\Relations\BelongsToMany;
use Illuminate\Database\Eloquent\Relations\HasMany;
use Illuminate\Database\Eloquent\SoftDeletes;

/**
 * @property int $id
 * @property int $user_id
 * @property string $title
 * @property string $content
 * @property string $status
 * @property array|null $metadata
 * @property \Carbon\Carbon $created_at
 * @property \Carbon\Carbon $updated_at
 * @property \Carbon\Carbon|null $deleted_at
 * 
 * @property-read User $user
 * @property-read \Illuminate\Database\Eloquent\Collection|Comment[] $comments
 * @property-read \Illuminate\Database\Eloquent\Collection|Tag[] $tags
 */
class Post extends Model
{
    use SoftDeletes;

    protected $fillable = [
        'user_id',
        'title',
        'content',
        'status',
        'metadata',
    ];

    protected $casts = [
        'metadata' => 'array',
    ];

    public function user(): BelongsTo
    {
        return $this->belongsTo(User::class);
    }

    public function comments(): HasMany
    {
        return $this->hasMany(Comment::class);
    }

    public function tags(): BelongsToMany
    {
        return $this->belongsToMany(Tag::class, 'post_tag');
    }
}

You can also use Elosql programmatically:

use Sepehr_Mohseni\Elosql\Parsers\SchemaParserFactory;
use Sepehr_Mohseni\Elosql\Generators\MigrationGenerator;
use Sepehr_Mohseni\Elosql\Generators\ModelGenerator;

// Get the parser for your database
$parser = app(SchemaParserFactory::class)->make('mysql');

// Parse all tables
$tables = $parser->getTables();

// Or parse specific tables
$tables = $parser->getTables([
    'include' => ['users', 'posts'],
    'exclude' => ['migrations'],
]);

// Generate migrations
$migrationGenerator = app(MigrationGenerator::class);
$files = $migrationGenerator->generateAll($tables, 'mysql', database_path('migrations'));

// Generate models
$modelGenerator = app(ModelGenerator::class);
foreach ($tables as $table) {
    $content = $modelGenerator->generate($table, 'mysql', $tables);
    // Write to file or process as needed
}

Elosql handles foreign key dependencies intelligently:

  1. Dependency Resolution – Tables are ordered based on their foreign key dependencies using topological sorting
  2. Separate FK Migrations – Foreign keys are generated in separate migration files that run after all tables are created
  3. Circular Dependencies – Detected and reported with suggestions for resolution

This ensures migrations can be run without foreign key constraint violations.
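The dependency ordering described above can be sketched as a small topological sort (an illustrative version, not Elosql's actual implementation): a table is emitted once every table its foreign keys reference has been emitted, and a leftover unemitted set signals a circular dependency.

```javascript
// deps maps each table to the tables it references via foreign keys.
function orderTables(deps) {
  const ordered = [];
  const done = new Set();
  let progress = true;
  while (progress) {
    progress = false;
    for (const [table, refs] of Object.entries(deps)) {
      // Emit a table once all referenced tables are emitted (or external).
      if (!done.has(table) && refs.every((r) => done.has(r) || !(r in deps))) {
        ordered.push(table);
        done.add(table);
        progress = true;
      }
    }
  }
  if (done.size !== Object.keys(deps).length) throw new Error('circular dependency');
  return ordered;
}

console.log(orderTables({
  posts: ['users'],
  comments: ['posts', 'users'],
  users: [],
}));
// [ 'users', 'posts', 'comments' ]
```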

  • Integers: tinyint, smallint, mediumint, int, bigint
  • Floating point: float, double, decimal
  • Strings: char, varchar, text, mediumtext, longtext
  • Binary: binary, varbinary, blob
  • Date/Time: date, datetime, timestamp, time, year
  • Special: json, enum, set, boolean
  • Spatial: point, linestring, polygon, geometry
  • All standard types plus: uuid, jsonb, inet, macaddr, cidr
  • Array types
  • Range types
  • integer, real, text, blob, numeric
  • All standard types plus: uniqueidentifier, nvarchar, ntext

Run the test suite:

Run with coverage:

Run static analysis:

Fix code style:

Contributions are welcome! Please see CONTRIBUTING.md for details.

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

If you discover any security-related issues, please email isepehrmohseni@gmail.com instead of using the issue tracker.

The MIT License (MIT). Please see License File for more information.

Laravel News Links

1000 Ants vs. an Obstacle Course

https://theawesomer.com/photos/2026/01/ants_vs_obstacles_t.jpg


After seeing videos of how ant colonies work together, TerraGreen thought it would be interesting to see how they behave up close. So he gathered up roughly 1000 ants from an ant nest and put them to the test in a series of collaborative challenges and obstacle courses. Antony is quite the overachiever.

The Awesomer

America Is Falling Out of Love With Pizza

The restaurant industry is trying to figure out whether America has hit peak pizza. From a report: Once the second-most common U.S. restaurant type, pizzerias are now outnumbered by coffee shops and Mexican food eateries, according to industry data. Sales growth at pizza restaurants has lagged behind the broader fast-food market for years, and the outlook ahead isn’t much brighter. "Pizza is disrupted right now," Ravi Thanawala, chief financial officer and North America president at Papa John’s International, said in an interview. "That’s what the consumer tells us."

The parent of the Pieology Pizzeria chain filed for chapter 11 bankruptcy protection in December. Others, including the parent of Anthony’s Coal Fired Pizza & Wings and Bertucci’s Brick Oven Pizza & Pasta, earlier filed for bankruptcy.

Pizza once was a novelty outside big U.S. cities, providing room for growth for independent shops and then chains such as Pizza Hut with its red roof dine-in restaurants. Purpose-made cardboard boxes and fleets of delivery drivers helped make pizza a takeout staple for those seeking low-stress meals.

Today, pizza shops are engaged in price wars with one another and other kinds of fast food. Food-delivery apps have put a wider range of cuisines and options at Americans’ fingertips. And $20 a pie for a family can feel expensive compared with $5 fast-food deals, frozen pizzas or eating a home-cooked meal. […]

Pizza’s dominance in American restaurant fare is declining, however. Among different cuisines, it ranked sixth in terms of U.S. sales in 2024 among restaurant chains, down from second place during the 1990s, Technomic said. The number of pizza restaurants in the U.S. hit a record high in 2019 and has declined since then, figures from the market-research firm Datassential show. Further reading, at WSJ: The Feds Need to Bail Out the Pizza Industry.


Read more of this story at Slashdot.

Slashdot