https://media.notthebee.com/articles/697b85b916795697b85b916796.jpg
Rogan’s gonna Rogan!
Not the Bee
https://gizmodo.com/app/uploads/2026/01/computer-history-museum-1280×853.jpg
The Computer History Museum, based in Mountain View, California, looks like a fine way to spend an afternoon for anyone interested in, well, the history of computers. And if that description fits you but you're not in California, then rejoice, because CHM recently launched OpenCHM, an excellent online portal designed to allow exploration of the museum from afar.
You can, of course, just click around to see what catches your eye, but if that feels too unfocused, you can also go straight to the collection highlights. As you might expect, these include a solid selection of early computers and microcomputers, along with photos, records, and other objects of historic import. Several objects predate the information age, including a Jacquard loom and a copy of The Adams Cable Codex, a fascinating 1894 book that catalogs hundreds of code words that were used to save space when sending messages via cable. Happily, there's a full scan of the same book at the Internet Archive, because the CHM's own documentation on it is rather minimal.

This is the case throughout the site. In fairness, OpenCHM is still in beta, and hopefully the item descriptions will be fleshed out as the site develops, but as it stands, their terse nature means that some of the objects on show are disappointingly inscrutable. For example, it took a bit of googling to work out what on earth a klystron is, and the CHM's description isn't much help, noting only that "This item is mounted on a wooden base." (For the record, a klystron is a vacuum tube amplifier that looks cool as hell.)
Still, such quibbles aside, there's a wealth of material to explore here, and on the whole, OpenCHM makes doing so both easy and enjoyable. It provides multiple entry points to the collection. In addition to the aforementioned highlights page and a series of curated collections, there's something called the "Discovery Wall". This is described as "a dynamic showcase of artifacts chosen by online visitors", and it's certainly interesting to see what catches people's attention. At the time of our virtual visit, items on display on the Discovery Wall included an alarmingly yellow Atari t-shirt from 1977, a Tamagotchi (in its original packaging!), a placard from the 2023 Writers' Guild strike ("Don't let bots write your shows!") and a Microsoft PS/2 mouse, the mere sight of which is likely to cause shudders in anyone with memories of flipping one of these over to pull out the trackball and clean months' worth of accumulated crud from the two little rollers inside.

Perhaps the single most poignant item we came across, however, is a copy of Ted Nelson's self-published 1974 opus Computer Lib/Dream Machines, which promoted computer literacy and the liberation Nelson hoped it would bring. The document is strikingly forward-thinking (amongst other things, it predicted hypertext, of which Nelson was an early proponent), but the techno-utopianism on display seems both charmingly innocent and painfully naïve today. "New Freedoms Through Computer Screens", promises the rear cover. If only they knew.
Gizmodo
https://www.louderwithcrowder.com/media-library/image.jpg?id=63361689&width=980
Watch Louder with Crowder every weekday at 11:00 AM Eastern, only on Rumble Premium!
1. ICE is not law enforcement.
ICE stands for Immigration & Customs Enforcement.
ICE was established in 2003 as part of the Homeland Security Act of 2002. This act also established the Department of Homeland Security.
Section 441 of the Homeland Security Act transfers immigration enforcement functions to the Under Secretary for Border and Transportation Security. This included the Border Patrol, INS, and detention and removal functions, amongst others.
Section 442 of the Homeland Security Act establishes a Bureau of Border Security, headed by an assistant secretary to the Under Secretary.
These, amongst other provisions, allowed the Department of Homeland Security to form the Bureau of Immigration and Customs Enforcement (ICE).
Congress expanded ICE's authorities through the Intelligence Reform and Terrorism Prevention Act of 2004, the Trafficking Victims Protection Reauthorization Acts of 2003 and 2005, and the Immigration and Nationality Act, amongst others.
Not only is ICE a law enforcement agency, its jurisdictional authority and enforcement role have been constantly expanding since its creation.
ICE has the authority to arrest and detain illegal aliens under US Code Title 8 Chapter 12 Subchapter II Part IV Section 1226 and US Code Title 8 Chapter 12 Subchapter II Part IX Section 1357. They also have broader enforcement authority as sworn agents under US Code Title 18, which includes an entire chapter that defines obstruction of justice. This allows them to arrest people, including citizens, who are impeding ICE actions and committing obstruction of justice.
Not only is ICE an arm of federal law enforcement, it is specifically tasked with enforcing immigration and customs laws.
2. Immigrants commit fewer crimes.
The study claiming that immigrants commit fewer crimes was conducted at Northwestern University and is titled Law-Abiding Immigrants: The Incarceration Gap Between Immigrants and the U.S.-Born, 1870-2020.
The study, as would seem obvious, takes the data of immigrants and U.S. born citizens who are incarcerated in the American prison system.
Problem #1: The study ends in 2020.
While illegal immigration has been an increasing problem for decades, there was a significant surge of undocumented crossings into America, especially during Joe Biden's administration. In 1969, illegal immigrants made up 0.3% of the population. In 2020, Customs and Border Protection reported 646,822 enforcement actions. In 2021, they reported 1,956,519. In 2022, they recorded 2,766,582 actions. In 2023, they recorded 3,201,144 actions. In 2024, they recorded 2,901,142 actions.
We also have arrest statistics from CBP.
Every metric from the Department of Homeland Security shows significant demographic changes to the illegal immigrant population during the Biden administration, after 2020. Consider, as well, that the defund-the-police movement coincided with the migrant surge under Joe Biden; all of this adds up to a recipe for disaster.
Problem #2: The study doesnāt differentiate between illegal and legal immigrants.
Considering the restrictions and caveats on legal immigration, it stands to reason that we aren't allowing the criminal element to immigrate here. For example, one of the requirements for eligibility is to be a person of good moral character. The vetting process for a legal immigrant carries the expectation that they are not the kind of person who would commit a crime.
Problem #3: Illegal immigrant crime calculations leave out crimes related to fraudulent social security numbers, fake driver's licenses, fraudulent green cards, and improperly accessing public benefits. The State Criminal Alien Assistance Program (SCAAP) is a Bureau of Justice Assistance program that provides federal payments to states and localities that incurred correctional officer salary costs for incarcerating undocumented criminal aliens. Yes, the federal government is subsidizing the incarceration of illegal aliens. SCAAP has far different numbers on illegal immigration. SCAAP's data shows that illegals actually ARE committing more crimes. The Federation for American Immigration Reform found that illegals are twice as likely to be in prison in California and New York, four times as likely in New Jersey, and almost five times more likely in Arizona.
Problem #4: A crime can only be counted if it's reported. Illegal immigrants are less likely to report crimes and appear in court as witnesses because of the fear of deportation. As recent ICE arrests have found and as history tells us, immigrants form ethnic enclaves, which means that if crimes are being committed by illegal immigrants in illegal immigrant enclaves, we can assume that some of them, perhaps at a rate much higher than in the general population, are going unreported.
Even factcheck.org admits there aren't nationwide statistics on all crimes committed by illegal immigrants, only estimates extracted from smaller samples.
Then again, every person who has entered the country illegally has committed a crime, making the illegal immigrant crime rate 100%. Which leads us to:
3. Crossing the border is not a crime, and no human is illegal.
You've heard it before: "Entering the United States is not a crime; it's just a misdemeanor."
Okay, so, misdemeanors ARE crimes. The word "misdemeanor" is a designation that refers to the seriousness of the offense. You have misdemeanors, and you have felonies. Felonies typically carry heavier punishments, but both are crimes.
Improper (or illegal) entry into the US is defined in US Code Title 8 Chapter 12 Subchapter II Part VIII Section 1325. The misdemeanor carries fines and prison time. Marriage fraud and entrepreneur fraud carry heftier penalties. But that's just for the first offense.
If you have been removed and reenter, things get worse. And, depending on why you were ordered removed, the penalties can be even worse than that.
Who can be removed? Anyone who came here by illegal means, including people who have violated conditions of entry. Unlawful voters, traffickers, drug abusers… there are a lot of offenses that are deportable. Please peruse at your leisure.
As for no human being illegal… Humans can be criminals. Again, that's how crime works. If you are committing a crime, you are subject to legal action. The word "alien" as a legal term for foreign nationals appeared in the Naturalization Act of 1790 and the Alien and Sedition Acts of 1798. Adding "illegal" simply makes it a descriptor for a foreign national who is in the country illegally. "Illegal alien" can be found as far back as 1924, the same year the United States Border Patrol was established. The Supreme Court used the term in the 1976 case United States v. Martinez-Fuerte. Bill Clinton used the term in his 1995 State of the Union address. As the term "alien" is still used in federal statutes and regulations, the term "illegal alien" is still appropriate when referring to people who have entered and/or are in the United States illegally.
Bottom line: The United States of America is a country with laws and a border. It is illegal to cross the border in any way that the United States does not define as lawful. If it is not lawful, it is a crime. Anyone who has come to the United States of America in a way that does not follow US law has committed a crime. That's how crime works. I don't know why I have to explain that.
Louder With Crowder
https://blog.laragent.ai/content/images/size/w1200/2026/01/ChatGPT-Image-Jan-22–2026–03_15_35-PM.png
This major release takes LarAgent to the next level – focused on structured responses, reliable context management, richer tooling, and production-grade agent behavior.
Designed for both development teams and business applications where predictability, observability, and scalability matter.
LarAgent introduces DataModel-based structured responses, moving beyond arrays to typed, predictable output shapes you can rely on in real apps.
Example
use LarAgent\Core\Abstractions\DataModel;
use LarAgent\Attributes\Desc;
class WeatherResponse extends DataModel
{
#[Desc('Temperature in Celsius')]
public float $temperature;
#[Desc('Condition (sunny/cloudy/etc.)')]
public string $condition;
}
class WeatherAgent extends Agent
{
protected $responseSchema = WeatherResponse::class;
}
$response = WeatherAgent::ask('Weather in Tbilisi?');
echo $response->temperature;
v1.0 introduces a pluggable storage layer for chat history and context, enabling persistent, switchable, and scalable storage drivers.
class MyAgent extends Agent
{
protected $history = [
CacheStorage::class, // Primary: read first, write first
FileStorage::class, // Fallback: used if primary fails on read
];
}
Long chats are inevitable, but hitting token limits shouldn't be catastrophic. LarAgent now provides smart context management strategies.
class MyAgent extends Agent
{
protected $enableTruncation = true;
protected $truncationThreshold = 50000;
}
Save on token costs while preserving the context most relevant to the current conversation.
Context now supports identity-based sessions, which are created from the user ID, chat name, agent name, and group. Identity storage holds all identity keys, making the context of any agent available to manage via the Context facade. For example:
Context::of(MyAgent::class)
->forUser($userId)
->clearAllChats();
Better support for multi-tenant SaaS, shared agents, and enterprise apps.
Generate fully formed custom tool classes with boilerplate and an IDE-friendly structure:
php artisan make:agent:tool WeatherTool
This generates a ready tool with name, description, and handle() stub. Ideal for quickly adding capabilities to your agents.
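As a rough illustration of the shape of that stub (the base class, property names, and return type here are assumptions for the sake of the example, not the generator's exact output), a WeatherTool might look something like this:

// Hypothetical sketch of a generated tool class; the real stub may differ
class WeatherTool extends Tool // base class name is an assumption
{
    protected string $name = 'weather_tool';
    protected string $description = 'Fetches the current weather for a city';

    // handle() receives the arguments the model supplies for the tool call
    public function handle(string $city): string
    {
        // Replace this stub with a real lookup in your application
        return "Weather data for {$city}";
    }
}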
Now the CLI chat shows tool calls as they happen, which is invaluable when debugging agent behavior.
You: Find me Laravel queue docs
Tool call: web_search
Tool call: extract_content
Agent: Here's the documentation…
Easier debugging and more transparency into what your agent actually does.
MCP (Model Context Protocol) tools now support automatic caching.
Add to .env:
MCP_TOOL_CACHE_ENABLED=true
MCP_TOOL_CACHE_TTL=3600
MCP_TOOL_CACHE_STORE=redis
Clear with:
php artisan agent:tool-clear
Great for production systems where latency matters.
Track prompt tokens, completion tokens, and usage stats per agent, ideal for cost analysis and billing:
$agent = MyAgent::for('user-123');
$usage = $agent->usageStorage();
$totalTokens = $usage->getTotalTokens();
Usage tracking is based on session identity, which means you can check token usage by user, by agent, and/or by chat, allowing you to implement comprehensive statistics and reporting.
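As a small illustration of how per-identity tracking can feed reporting, the sketch below combines the calls shown above with a hypothetical per-token price; the price constant and the logging line are assumptions for the example, not part of LarAgent.

// Hypothetical cost estimate for one user's usage (price is an assumed example rate)
$pricePerToken = 0.000002;

$agent = MyAgent::for('user-123');
$usage = $agent->usageStorage();

$totalTokens = $usage->getTotalTokens();
$estimatedCost = $totalTokens * $pricePerToken;

logger()->info("user-123 used {$totalTokens} tokens (~\${$estimatedCost} estimated)");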
v1.0 includes a few breaking API changes. Make sure to check the migration guide.
The release also includes a range of smaller production-focused improvements.
LarAgent v1.0 is all about reliability, predictability, and scale, turning AI agents into first-class citizens of your Laravel application.
Happy coding!
Laravel News Links
https://picperf.io/https://laravelnews.s3.amazonaws.com/featured-images/laravel-debugbar-v4.png
Release Date: January 23, 2025
Package Version: v4.0.0
Summary
Laravel Debugbar v4.0.0 marks a major release with package ownership transferring from barryvdh/laravel-debugbar to fruitcake/laravel-debugbar. This version brings php-debugbar 3.x support and includes several new collectors and improvements for modern Laravel applications.
This release adds a new collector that tracks HTTP client requests made through Laravel’s HTTP client. The collector provides visibility into outbound API calls, making it easier to debug external service integrations and monitor response times.
For applications using Inertia.js, the new Inertia collector tracks shared data and props passed to Inertia components. This helps debug data flow in Inertia-powered applications.
The debugbar now includes improved component detection for Livewire versions 2, 3, and 4. This provides better visibility into Livewire component lifecycle events and data updates across all currently supported Livewire versions.
This version includes better handling for Laravel Octane and other long-running server processes. The debugbar now properly manages state across requests in persistent application environments.
The cache widget now displays estimated byte usage, giving developers better insight into cache memory consumption during request processing.
This version has many UI improvements and new settings, including debugbar position, auto-hiding empty collectors, themes (Dark, Light, Auto), and more.

The package has moved from barryvdh/laravel-debugbar to fruitcake/laravel-debugbar, requiring manual removal and reinstallation:
composer remove barryvdh/laravel-debugbar --dev --no-scripts
composer require fruitcake/laravel-debugbar --dev --with-dependencies
The namespace has changed from the original structure to Fruitcake\LaravelDebugbar. You’ll need to update any direct references to debugbar classes in your codebase.
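For example, a facade import might change along the following lines; the new class path shown here is an assumption based on the announced Fruitcake\LaravelDebugbar namespace, so verify the exact classes against the package before updating.

// Before (v3.x, barryvdh namespace)
use Barryvdh\Debugbar\Facades\Debugbar;

// After (v4.x, assumed Fruitcake namespace; verify against the package)
use Fruitcake\LaravelDebugbar\Facades\Debugbar;

Debugbar::info('Order processed');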
Several features have also been removed in this major version.
Default configuration values have been updated, and deprecated configuration options have been removed. Review your config/debugbar.php file and compare it with the published configuration from the new package.
This is not a standard upgrade. You must manually remove the old package and install the new one using the commands shown above. After installation, update any namespace references in your code from the old barryvdh namespace to Fruitcake\LaravelDebugbar.
Review your configuration file for deprecated options and compare with the new defaults. The package maintains compatibility with Laravel 9.x through 12.x. See the upgrade docs for details on upgrading from 3.x to 4.x.
Laravel News
https://techversedaily.com/storage/posts/MzQOunegn6DppLlmQlHsC7Mp4l52cV55BAhrUwPY.png
Your Laravel application felt fast during development.
Pages loaded instantly. Queries returned results in milliseconds. Everything seemed under control.
Then you deployed to production.
Traffic increased. Data grew. Users started complaining: "The site feels slow."
This is a classic Laravel problem, and no, it's usually not caused by PHP or Blade templates.
The real bottleneck is almost always the database.
In production, inefficient queries don't just slow down a page; they compound under load, drain server resources, and quietly kill performance.
In this guide, you'll learn how to systematically optimize Laravel databases in production using three essential tools:
Database Indexes
EXPLAIN (Query Execution Plans)
MySQL Slow Query Log
Used together, these tools turn guessing into measurable optimization.
A few uncomfortable truths:
A 1-second delay can reduce conversions by 7%
Full table scans grow exponentially with data
What works with 10,000 rows fails miserably at 1 million
Laravel doesn't automatically fix bad queries
Your database doesn't care how elegant your Eloquent code looks; it only cares how much work it has to do.
Optimization is about reducing work.
Without an index, MySQL must scan every row to find matching data.
Think of it like this:
Indexes turn O(n) scans into O(log n) lookups.
Create indexes on columns that are: frequently filtered in WHERE clauses, used in JOIN conditions, or used for sorting with ORDER BY.
Avoid indexing: columns that are rarely queried, and low-cardinality columns (such as boolean flags) on their own.
$orders = Order::where('user_id', $userId)->get();
If orders.user_id is not indexed, MySQL scans the entire table.
Schema::table('orders', function (Blueprint $table) {
$table->index('user_id');
});
Now MySQL can jump straight to relevant rows.
Real queries rarely filter on just one column.
$orders = Order::where('user_id', $userId)
->where('status', 'paid')
->orderBy('created_at', 'desc')
->get();
Schema::table('orders', function (Blueprint $table) {
$table->index(['user_id', 'status', 'created_at']);
});
Index order matters:
MySQL can use (user_id, status)
It cannot efficiently use (status, created_at) alone
Always index columns in the same order your queries filter them.
Common mistakes to avoid:
Indexing every column
Guessing instead of measuring
Ignoring write performance
Indexing low-cardinality fields alone
Indexes speed up reads but slow down writes. Balance is key.
Writing a query doesnāt mean MySQL executes it the way you expect.
EXPLAIN shows the truth.
$plan = DB::select(
'EXPLAIN SELECT * FROM orders WHERE user_id = ?',
[$userId]
);
dd($plan);
EXPLAIN SELECT * FROM orders WHERE user_id = 10;
| EXPLAIN column | What it tells you |
| --- | --- |
| type | The scan method |
| key | The index actually used (NULL = no index used) |
| rows | Estimated rows scanned; smaller is always better |
| Extra | Using filesort = slow sorting; Using temporary = temp table created; Using index = index-only query (excellent) |
EXPLAIN SELECT * FROM products WHERE category_id = 5;
Result:
type = ALL
key = NULL
rows = 600000
MySQL scanned the entire table.
CREATE INDEX idx_category_id ON products (category_id);
EXPLAIN SELECT * FROM products WHERE category_id = 5;
Result:
type = ref
key = idx_category_id
rows = 120
A massive improvement with zero code changes.
Some performance issues only appear in production.
That's where the slow query log shines.
It records queries that exceed a time threshold.
Think of it as a black box recorder for your database.
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1;
SET GLOBAL log_queries_not_using_indexes = 1;
Queries taking longer than 1 second will be logged.
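To confirm the settings took effect before relying on them, you can query the server variables (values set with SET GLOBAL reset on restart unless they are also added to the config file shown below):

-- Verify the slow query log settings currently in effect
SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';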
Edit MySQL config:
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1
log_queries_not_using_indexes = 1
Restart MySQL:
sudo systemctl restart mysql
Query_time: 2.94
Rows_examined: 184732
SELECT * FROM orders
WHERE user_id = 123
ORDER BY created_at DESC;
This query scanned 184,732 rows to return a few records.
That's your optimization target.
mysqldumpslow -s t -t 10 /var/log/mysql/mysql-slow.log
pt-query-digest /var/log/mysql/mysql-slow.log
This gives:
Query frequency
Total execution time
Average latency
Rows examined
composer require laravel/telescope
php artisan telescope:install
php artisan migrate
View query execution time directly in the dashboard.
composer require barryvdh/laravel-debugbar --dev
Never use Debugbar in production.
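One practical safeguard, relying on the DEBUGBAR_ENABLED switch that the package's default configuration reads, is to disable it explicitly in your production environment file; since the package is installed with --dev, deploying with composer install --no-dev also keeps it out of production builds entirely.

# .env (production)
DEBUGBAR_ENABLED=false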
Enable slow query log
Identify worst queries
Run EXPLAIN
Add or adjust indexes
Refactor queries if needed
Measure before & after
Deploy and monitor
Optimization without measurement is guesswork.
Product::where('name', 'LIKE', '%laptop%')
->where('is_active', 1)
->orderBy('created_at', 'desc')
->paginate(20);
EXPLAIN showed:
Full table scan
Filesort
800k rows scanned
Schema::table('products', function (Blueprint $table) {
$table->fullText('name');
});
Product::whereFullText('name', 'laptop')
->where('is_active', 1)
->paginate(20);
Schema::table('products', function (Blueprint $table) {
$table->index(['is_active', 'created_at']);
});
| Metric | Before | After |
| ------------ | ------- | ------- |
| Rows scanned | 850,000 | 220 |
| Query time | 3.1s | 0.04s |
| CPU usage | High | Minimal |
Same app. Same data.
Just smarter database usage.
Laravel News Links
https://s3files.core77.com/blog/images/1790938_81_140409_0_G8pt0a3.jpg
These days, companies like Stihl and Makita sell multi-heads. These are battery-powered motors that can drive a variety of common landscaping attachments, like string trimmers and hedge cutters.
Uniquely, Makita also offers this Snow Thrower attachment:
The business end is 12" wide and can handle a 6" depth of snow at a time. Tiltable vanes on the inside let you control whether you want to throw the snow to the left, to the right or straight ahead. The company says you can clear about five parking spaces with two 18V batteries.

So how well does it work? Seeing is believing. Here’s Murray Kruger of Kruger Construction putting it through its paces:
Core77
https://opengraph.githubassets.com/31857b29a46a84e2c6bbc712f5ec663b85c7a1b32aa7df4cc5c9e371828cb100/devrabiul/laravel-toaster-magic/releases/tag/v2.0
Laravel Toaster Magic is designed to be the only toaster package you’ll need for any type of Laravel project.
Whether you are building a corporate dashboard, a modern SaaS, a gaming platform, or a simple blog, I have crafted a theme that fits perfectly.
"One Package, Many Themes." No need to switch libraries just to change the look.
This major release brings 7 stunning new themes, full Livewire v3/v4 support, and modern UI enhancements.
I have completely redesigned the visual experience. You can now switch between 7 distinct themes by simply updating your config.
| Theme | Config Value | Description |
|---|---|---|
| Default | 'default' | Clean, professional, and perfect for corporate apps. |
| Material | 'material' | Google Material Design inspired. Flat and bold. |
| iOS | 'ios' | (Fan Favorite) Apple-style notifications with backdrop blur and smooth bounce animations. |
| Glassmorphism | 'glassmorphism' | Trendy frosted glass effect with vibrant borders and semi-transparent backgrounds. |
| Neon | 'neon' | (Dark Mode Best) Cyberpunk-inspired with glowing neon borders and dark gradients. |
| Minimal | 'minimal' | Ultra-clean, distraction-free design with simple left-border accents. |
| Neumorphism | 'neumorphism' | Soft UI design with 3D embossed/debossed plastic-like shadows. |
How to use:
// config/laravel-toaster-magic.php
'theme' => 'neon',
I’ve rewritten the Javascript core to support Livewire v3 & v4 natively.
Events are dispatched via Livewire.on (v3) or standard event dispatching, with support for wire:navigate.
// Dispatch from component
$this->dispatch('toastMagic',
    status: 'success',
    message: 'User Saved!',
    title: 'Great Job'
);
Want your toasts to pop without changing the entire theme? Enable Gradient Mode to add a subtle "glow-from-within" gradient based on the toast type (Success, Error, etc.).
// config/laravel-toaster-magic.php
'gradient_enable' => true
Works best with Default, Material, Neon, and Glassmorphism themes.
Don’t want themes? Just want solid colors? Color Mode forces the background of the toast to match its type (Green for Success, Red for Error, etc.), overriding theme backgrounds for high-visibility alerts.
// config/laravel-toaster-magic.php
'color_mode' => true
I have completely modularized the CSS.
Theme styles are now scoped under their own classes (e.g., .theme-neon, .theme-ios) to prevent conflicts, and dark mode is detected via body[theme="dark"].
Upgrading from v1.x to v2.0?
Update Composer:
composer require devrabiul/laravel-toaster-magic "^2.0"
Republish Assets (Critical for new CSS/JS):
php artisan vendor:publish --tag=toast-magic-assets --force
Check Config:
If you have a published config file, add the new options:
'options' => [
    'theme' => 'default',
    'gradient_enable' => false,
    'color_mode' => false,
],
'livewire_version' => 'v3',
v2.0 transforms Laravel Toaster Magic from a simple notification library into a UI-first experience. Whether you’re building a sleek SaaS (use iOS), a gaming platform (use Neon), or an admin dashboard (use Material), there is likely a theme for you.
Enjoy the magic!
Laravel News Links
https://d2908q01vomqb2.cloudfront.net/887309d048beef83ad3eabf2a79a64a389ab1c9f/2026/01/12/DBBLOG-50081-1-1260×597.png
Audit logging has become a crucial component of database security and compliance, helping organizations track user activities, monitor data access patterns, and maintain detailed records for regulatory requirements and security investigations. Database audit logs provide a comprehensive trail of actions performed within the database, including queries executed, changes made to data, and user authentication attempts. Managing these logs is more straightforward with a robust storage solution such as Amazon Simple Storage Service (Amazon S3).
Amazon Relational Database Service (Amazon RDS) for MySQL and Amazon Aurora MySQL-Compatible Edition provide built-in audit logging capabilities, but customers might need to export and store these logs for long-term retention and analysis. Amazon S3 offers an ideal destination, providing durability, cost-effectiveness, and integration with various analytics tools.
In this post, we explore two approaches for exporting MySQL audit logs to Amazon S3: either using batching with a native export to Amazon S3 or processing logs in real time with Amazon Data Firehose.
The first solution involves batch processing by using the built-in audit log export feature in Amazon RDS for MySQL or Aurora MySQL-Compatible to export logs to Amazon CloudWatch Logs. Amazon EventBridge periodically triggers an AWS Lambda function. This solution creates a CloudWatch export task that sends the previous day's audit logs to Amazon S3. The period (one day) is configurable based on your requirements. This solution is the most cost-effective and practical option if you don't require the audit logs to be available in real time within an S3 bucket. The following diagram illustrates this workflow.

The other proposed solution uses Data Firehose to immediately process the MySQL audit logs within CloudWatch Logs and send them to an S3 bucket. This approach is suitable for business use cases that require immediate export of audit logs when they're available within CloudWatch Logs. The following diagram illustrates this workflow.

Once you've implemented either of these solutions, you'll have your Aurora MySQL or RDS for MySQL audit logs stored securely in Amazon S3. This opens up a wealth of possibilities for analysis, monitoring, and compliance reporting.
By leveraging these capabilities, you can turn your audit logs from a passive security measure into an active tool for database management, security enhancement, and business intelligence.
The first solution used EventBridge to periodically trigger a Lambda function. This function creates a CloudWatch Log export task that sends a batch of log data to Amazon S3 at regular intervals. This method is well-suited for scenarios where you prefer to process logs in batches to optimize costs and resources.
The second solution uses Data Firehose to create a real-time audit log processing pipeline. This approach streams logs directly from CloudWatch to an S3 bucket, providing near real-time access to your audit data. In this context, "real-time" means that log data is processed and delivered synchronously as it is generated, rather than being sent at a pre-defined interval. This solution is ideal for scenarios requiring immediate access to log data or for high-volume logging environments.
Whether you choose the near real-time streaming approach or the scheduled export method, you will be well equipped to manage your Aurora MySQL and RDS for MySQL audit logs effectively.
Before getting started, complete the following prerequisites:
Note: By default, audit logging captures activity for all users, which can potentially be costly.
aws s3api create-bucket --bucket <bucket_name>
After the command is complete, you will see an output similar to the following:
Note: Each solution has specific service components which are discussed in their respective sections.
In this solution, we create a Lambda function to export your audit log to Amazon S3 based on the schedule you set using EventBridge Scheduler. This solution offers a cost-efficient way to transfer audit log files within an S3 bucket in a scheduled manner.
The first step is to create an AWS Identity and Access Management (IAM) role responsible for allowing EventBridge Scheduler to invoke the Lambda function we will create later. Complete the following steps to create this role:
Create a file named TrustPolicyForEventBridgeScheduler.json using your preferred text editor:
nano TrustPolicyForEventBridgeScheduler.json
Note: Make sure to amend SourceAccount before saving the file. The condition is used to prevent unauthorized access from other AWS accounts.
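The policy document itself is not reproduced in this excerpt; a minimal sketch, assuming the standard EventBridge Scheduler service principal and the SourceAccount condition mentioned in the note (substitute your own account ID), would look like the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "scheduler.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "aws:SourceAccount": "<your-account-id>" }
      }
    }
  ]
}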
Create a file named PermissionsForEventBridgeScheduler.json using your preferred text editor:
nano PermissionsForEventBridgeScheduler.json
Note: Replace <LambdaFunctionName> with the name of the function you'll create later.
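This permissions document is likewise not shown in this excerpt; a minimal sketch that lets the scheduler invoke the target function (region, account ID, and function name are placeholders) might be:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:<region>:<account-id>:function:<LambdaFunctionName>"
    }
  ]
}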
In this section, we created an IAM role with appropriate trust and permissions policies that allow EventBridge Scheduler to securely invoke Lambda functions from your AWS account. Next, we'll create another IAM role that defines the permissions that your Lambda function needs to execute its tasks.
The next step is to create an IAM role responsible for allowing Lambda to put records from CloudWatch into your S3 bucket. Complete the following steps to create this role:
Create a file named TrustPolicyForLambda.json using your preferred text editor:
nano TrustPolicyForLambda.json
Create a file named PermissionsForLambda.json using your preferred text editor:
nano PermissionsForLambda.json
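The policy bodies are not included in this excerpt. Since the function below calls CreateExportTask and DescribeExportTasks, a sketch of the permissions it needs might look like the following; note that the export task itself is written to S3 by the CloudWatch Logs service, which additionally requires an S3 bucket policy not shown here.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateExportTask",
        "logs:DescribeExportTasks"
      ],
      "Resource": "*"
    }
  ]
}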
To create a file with the code the Lambda function will invoke, complete the following steps:
Create a file named lambda_function.py using your preferred text editor:
nano lambda_function.py
import boto3
import os
import datetime
import logging
import time
from botocore.exceptions import ClientError, NoCredentialsError, BotoCoreError
logger = logging.getLogger()
logger.setLevel(logging.INFO)
def check_active_export_tasks(client):
"""Check for any active export tasks"""
try:
response = client.describe_export_tasks()
active_tasks = [
task for task in response.get('exportTasks', [])
if task.get('status', {}).get('code') in ['RUNNING', 'PENDING']
]
return active_tasks
except ClientError as e:
logger.error(f"Error checking active export tasks: {e}")
return []
def wait_for_export_task_completion(client, max_wait_minutes=15, check_interval=60):
"""Wait for any active export tasks to complete"""
max_wait_seconds = max_wait_minutes * 60
waited_seconds = 0
while waited_seconds < max_wait_seconds:
active_tasks = check_active_export_tasks(client)
if not active_tasks:
logger.info("No active export tasks found, proceeding...")
return True
logger.info(f"Found {len(active_tasks)} active export task(s). Waiting {check_interval} seconds...")
for task in active_tasks:
task_id = task.get('taskId', 'Unknown')
status = task.get('status', {}).get('code', 'Unknown')
logger.info(f"Active task ID: {task_id}, Status: {status}")
time.sleep(check_interval)
waited_seconds += check_interval
logger.warning(f"Timed out waiting for export tasks to complete after {max_wait_minutes} minutes")
return False
def lambda_handler(event, context):
try:
required_env_vars = ['GROUP_NAME', 'DESTINATION_BUCKET', 'PREFIX', 'NDAYS']
missing_vars = [var for var in required_env_vars if not os.environ.get(var)]
if missing_vars:
error_msg = f"Missing required environment variables: {', '.join(missing_vars)}"
logger.error(error_msg)
return {
'statusCode': 400,
'body': {'error': error_msg}
}
GROUP_NAME = os.environ['GROUP_NAME'].strip()
DESTINATION_BUCKET = os.environ['DESTINATION_BUCKET'].strip()
PREFIX = os.environ['PREFIX'].strip()
NDAYS = os.environ['NDAYS'].strip()
MAX_WAIT_MINUTES = int(os.environ.get('MAX_WAIT_MINUTES', '30'))
CHECK_INTERVAL = int(os.environ.get('CHECK_INTERVAL', '60'))
RETRY_ON_CONCURRENT = os.environ.get('RETRY_ON_CONCURRENT', 'true').lower() == 'true'
if not all([GROUP_NAME, DESTINATION_BUCKET, PREFIX, NDAYS]):
error_msg = "Environment variables cannot be empty"
logger.error(error_msg)
return {
'statusCode': 400,
'body': {'error': error_msg}
}
try:
nDays = int(NDAYS)
if nDays <= 0:
raise ValueError("NDAYS must be a positive integer")
except ValueError as e:
error_msg = f"Invalid NDAYS value '{NDAYS}': {str(e)}"
logger.error(error_msg)
return {
'statusCode': 400,
'body': {'error': error_msg}
}
try:
currentTime = datetime.datetime.now()
StartDate = currentTime - datetime.timedelta(days=nDays)
EndDate = currentTime - datetime.timedelta(days=nDays - 1)
fromDate = int(StartDate.timestamp() * 1000)
toDate = int(EndDate.timestamp() * 1000)
if fromDate >= toDate:
raise ValueError("Invalid date range: fromDate must be less than toDate")
except (ValueError, OverflowError) as e:
error_msg = f"Date calculation error: {str(e)}"
logger.error(error_msg)
return {
'statusCode': 400,
'body': {'error': error_msg}
}
try:
BUCKET_PREFIX = os.path.join(PREFIX, StartDate.strftime('%Y{0}%m{0}%d').format(os.path.sep))
except Exception as e:
error_msg = f"Error creating bucket prefix: {str(e)}"
logger.error(error_msg)
return {
'statusCode': 500,
'body': {'error': error_msg}
}
logger.info(f"Starting export task for log group: {GROUP_NAME}")
logger.info(f"Date range: {StartDate.strftime('%Y-%m-%d')} to {EndDate.strftime('%Y-%m-%d')}")
logger.info(f"Destination: s3://{DESTINATION_BUCKET}/{BUCKET_PREFIX}")
try:
client = boto3.client('logs')
except NoCredentialsError:
error_msg = "AWS credentials not found"
logger.error(error_msg)
return {
'statusCode': 500,
'body': {'error': error_msg}
}
except Exception as e:
error_msg = f"Error creating boto3 client: {str(e)}"
logger.error(error_msg)
return {
'statusCode': 500,
'body': {'error': error_msg}
}
if RETRY_ON_CONCURRENT:
logger.info("Checking for active export tasks...")
active_tasks = check_active_export_tasks(client)
if active_tasks:
logger.info(f"Found {len(active_tasks)} active export task(s). Waiting for completion...")
if not wait_for_export_task_completion(client, MAX_WAIT_MINUTES, CHECK_INTERVAL):
return {
'statusCode': 409,
'body': {
'error': f'Active export task(s) still running after {MAX_WAIT_MINUTES} minutes',
'activeTaskCount': len(active_tasks)
}
}
try:
response = client.create_export_task(
logGroupName=GROUP_NAME,
fromTime=fromDate,
to=toDate,
destination=DESTINATION_BUCKET,
destinationPrefix=BUCKET_PREFIX
)
task_id = response.get('taskId', 'Unknown')
logger.info(f"Export task created successfully with ID: {task_id}")
return {
'statusCode': 200,
'body': {
'message': 'Export task created successfully',
'taskId': task_id,
'logGroup': GROUP_NAME,
'fromDate': StartDate.isoformat(),
'toDate': EndDate.isoformat(),
'destination': f"s3://{DESTINATION_BUCKET}/{BUCKET_PREFIX}"
}
}
except ClientError as e:
error_code = e.response['Error']['Code']
error_msg = e.response['Error']['Message']
if error_code == 'ResourceNotFoundException':
logger.error(f"Log group '{GROUP_NAME}' not found")
return {
'statusCode': 404,
'body': {'error': f"Log group '{GROUP_NAME}' not found"}
}
elif error_code == 'LimitExceededException':
logger.error(f"Export task limit exceeded (concurrent task running): {error_msg}")
active_tasks = check_active_export_tasks(client)
return {
'statusCode': 409,
'body': {
'error': 'Cannot create export task: Another export task is already running',
'details': error_msg,
'activeTaskCount': len(active_tasks),
'suggestion': 'Only one export task can run at a time. Please wait for the current task to complete or set RETRY_ON_CONCURRENT=true to auto-retry.'
}
}
elif error_code == 'InvalidParameterException':
logger.error(f"Invalid parameter: {error_msg}")
return {
'statusCode': 400,
'body': {'error': f"Invalid parameter: {error_msg}"}
}
elif error_code == 'AccessDeniedException':
logger.error(f"Access denied: {error_msg}")
return {
'statusCode': 403,
'body': {'error': f"Access denied: {error_msg}"}
}
else:
logger.error(f"AWS ClientError ({error_code}): {error_msg}")
return {
'statusCode': 500,
'body': {'error': f"AWS error: {error_msg}"}
}
except BotoCoreError as e:
error_msg = f"BotoCore error: {str(e)}"
logger.error(error_msg)
return {
'statusCode': 500,
'body': {'error': error_msg}
}
except Exception as e:
error_msg = f"Unexpected error creating export task: {str(e)}"
logger.error(error_msg)
return {
'statusCode': 500,
'body': {'error': error_msg}
}
except Exception as e:
error_msg = f"Unexpected error in lambda_handler: {str(e)}"
logger.error(error_msg, exc_info=True)
return {
'statusCode': 500,
'body': {'error': 'Internal server error'}
}
zip function.zip lambda_function.py
Complete the following steps to create a Lambda function:
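The exact create-function command is not reproduced in this excerpt; a sketch of what it might look like, wiring up the environment variables the handler expects (the function name, runtime, role, and timeout values are placeholder assumptions), is shown below.

aws lambda create-function \
  --function-name ExportAuditLogsToS3 \
  --runtime python3.12 \
  --handler lambda_function.lambda_handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::<account-id>:role/<LambdaExportRole> \
  --timeout 300 \
  --environment "Variables={GROUP_NAME=<audit-log-group>,DESTINATION_BUCKET=<bucket_name>,PREFIX=audit-logs,NDAYS=1}"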
The NDAYS variable in the preceding command determines which dates of audit logs are exported per invocation of the Lambda function. For example, if you plan on exporting logs once per day to Amazon S3, set NDAYS=1, as shown in the preceding command.
Note: Reserved concurrency in Lambda sets a fixed limit on how many instances of your function can run simultaneously, like having a specific number of workers for a task. In this database export scenario, we're limiting it to 2 concurrent executions to prevent overwhelming the database, avoid API throttling, and ensure smooth, controlled exports. This limitation helps maintain system stability, prevents resource contention, and keeps costs in check.
In this section, we created a Lambda function that will handle the CloudWatch log exports, configured its essential parameters including environment variables, and set a concurrency limit to ensure controlled execution. Next, we'll create an EventBridge schedule that will automatically trigger this Lambda function at specified intervals to perform the log exports.
Complete the following steps to create an EventBridge schedule to invoke the Lambda function at an interval of your choosing:
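The original command for this step is not reproduced in this excerpt; a sketch of creating a daily schedule with EventBridge Scheduler (the schedule name, ARNs, and role names are placeholder assumptions) might look like this:

aws scheduler create-schedule \
  --name daily-audit-log-export \
  --schedule-expression "rate(1 day)" \
  --flexible-time-window Mode=OFF \
  --target '{"Arn":"arn:aws:lambda:<region>:<account-id>:function:ExportAuditLogsToS3","RoleArn":"arn:aws:iam::<account-id>:role/<EventBridgeSchedulerRole>"}'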
The schedule-expression parameter in the preceding command must correspond to the NDAYS environment variable set in the previously created Lambda function.
This solution provides an efficient, scheduled approach to exporting RDS audit logs to Amazon S3 using AWS Lambda and EventBridge Scheduler. By leveraging these serverless components, we've created a cost-effective, automated system that periodically transfers audit logs to S3 for long-term storage and analysis. This method is particularly useful for organizations that need regular, batch-style exports of their database audit logs, allowing for easier compliance reporting and historical data analysis.
While the first solution offers a scheduled, batch-processing approach, some scenarios require a more real-time solution for audit log processing. In our next solution, we'll explore how to create a near real-time audit log processing system using Amazon Data Firehose. This approach allows for continuous streaming of audit logs from RDS to S3, providing almost immediate access to log data.
In this section, we review how to create a near real-time audit log export to Amazon S3 using the power of Data Firehose. With this solution, you can directly load the latest audit log files to an S3 bucket for quick analysis, manipulation, or other purposes.
The first step is to create an IAM role responsible for allowing CloudWatch Logs to put records into the Firehose delivery stream (CWLtoDataFirehoseRole). Complete the following steps to create this role:
Create a file named TrustPolicyForCWL.json using your preferred text editor:
nano TrustPolicyForCWL.json
Create a file named PermissionsForCWL.json using your preferred text editor:
nano PermissionsForCWL.json
The next step is to create an IAM role (DataFirehosetoS3Role) responsible for allowing the Firehose delivery stream to insert the audit logs into an S3 bucket. Complete the following steps to create this role:
nano PermissionsForCWL.json
nano PermissionsForCWL.json
Now you create the Firehose delivery stream to allow near real-time transfer of MySQL audit logs from CloudWatch Logs to your S3 bucket. Complete the following steps:
Use the describe-delivery-stream command to check the status of the delivery stream. Note the DeliveryStreamDescription.DeliveryStreamARN value to use in a later step:
aws firehose describe-delivery-stream --delivery-stream-name <delivery-stream-name>
Finally, create a CloudWatch Logs subscription filter on your audit log group, using the delivery stream ARN you noted earlier as the destination-arn of your Firehose delivery stream (a sketch of this command follows below).

Your near real-time MySQL audit log solution is now properly configured and will begin delivering MySQL audit logs to your S3 bucket through the Firehose delivery stream.
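A sketch of that subscription filter step, assuming the role and stream created above (the log group name, filter name, and ARNs are placeholders):

aws logs put-subscription-filter \
  --log-group-name <audit-log-group> \
  --filter-name AuditLogsToFirehose \
  --filter-pattern "" \
  --destination-arn <DeliveryStreamARN> \
  --role-arn arn:aws:iam::<account-id>:role/CWLtoDataFirehoseRole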
To clean up your resources, complete the following steps (depending on which solution you used):
In this post, we've presented two solutions for managing Aurora MySQL or RDS for MySQL audit logs, each offering unique benefits for different business use cases.
We encourage you to implement these solutions in your own environment and share your experiences, challenges, and success stories in the comments section. Your feedback and real-world implementations can help fellow AWS users choose and adapt these solutions to best fit their specific audit logging needs.
Planet for the MySQL Community