The Computer History Museum, based in Mountain View, California, looks like a fine way to spend an afternoon for anyone interested in, well, the history of computers. And if that description fits you but you’re not in California, then rejoice, because CHM recently launched OpenCHM, an excellent online portal designed to allow exploration of the museum from afar.
You can, of course, just click around to see what catches your eye, but if that feels too unfocused, you can also go straight to the collection highlights. As you might expect, these include a solid selection of early computers and microcomputers, along with photos, records, and other objects of historic import. Several objects predate the information age, including a Jacquard loom and a copy of The Adams Cable Codex, a fascinating 1894 book that catalogs hundreds of code words that were used to save space when sending messages via cable. Happily, there’s a full scan of the same book at the Internet Archive, because the CHM’s own documentation on the book is rather minimal.
This is the case throughout the site. In fairness, OpenCHM is still in beta, and hopefully the item descriptions will be fleshed out as the site develops—but as it stands, their terse nature means that some of the objects on show are disappointingly inscrutable. For example, it took a bit of googling to work out what on earth a klystron is, and the CHM’s description isn’t much help, noting only that “This item is mounted on a wooden base.” (For the record, a klystron is a vacuum tube amplifier that looks cool as hell.)
Still, such quibbles aside, there’s a wealth of material to explore here, and on the whole, OpenCHM makes doing so both easy and enjoyable. It provides multiple entry points to the collection. In addition to the aforementioned highlights page and a series of curated collections, there’s something called the “Discovery Wall”. This is described as “a dynamic showcase of artifacts chosen by online visitors”, and it’s certainly interesting to see what catches people’s attention. At the time of our virtual visit, items on display on the Discovery Wall included an alarmingly yellow Atari t-shirt from 1977, a Tamagotchi (in its original packaging!), a placard from the 2023 Writers’ Guild strike (“Don’t let bots write your shows!”), and a Microsoft PS/2 mouse, the mere sight of which is likely to cause shudders in anyone with memories of flipping one of these over to pull out the trackball and clean months’ worth of accumulated crud from the two little rollers inside.
Perhaps the single most poignant item we came across, however, is a copy of Ted Nelson’s self-published 1974 opus Computer Lib/Dream Machines, which promoted computer literacy and the liberation Nelson hoped it would bring. The document is strikingly forward-thinking—amongst other things, it predicted hypertext, of which Nelson was an early proponent—but the techno-utopianism on display seems both charmingly innocent and painfully naïve today. “New Freedoms Through Computer Screens”, promises the rear cover. If only they knew.
When people in Star Wars get killed or dismembered by a lightsaber, it’s a pretty neat, tidy, and speedy event. But the morbid Mr. Death explains what’s more likely to happen to a human struck by a 20,000°C plasma beam when taking real-world physics into account. It sounds quite awful compared to what we’ve seen on screen.
Section 441 of the Homeland Security Act transfers immigration enforcement functions to the Under Secretary for Border and Transportation Security. These included the Border Patrol, the INS, and detention and removal, among others.
Problem #2: The study doesn’t differentiate between illegal and legal immigrants.
Considering the restrictions and caveats to legal immigration, it would stand to reason that we aren’t allowing the criminal element to immigrate here. For example, one of the requirements for eligibility is to be a person of good moral character. There is an expectation in the vetting process for a legal immigrant that they are not the kind of person who would commit a crime.
Even factcheck.org admits there aren’t nationwide statistics on all crimes committed by illegal immigrants, only estimates extracted from smaller samples.
Then again, every person who has entered the country illegally has committed a crime, making the illegal immigrant crime rate 100%. Which leads us to:
3. Crossing the border is not a crime, and no human is illegal.
You’ve heard it before: "Entering the United States is not a crime; it’s just a misdemeanor."
Who can be removed? Anyone who came here by illegal means, including people who have violated conditions of entry. Unlawful voters, traffickers, drug abusers…there are a lot of offenses that are deportable. Please peruse at your leisure.
As for no human being illegal… Humans can be criminals. Again. That’s how crime works. If you are committing a crime, you are subject to legal action. The word "alien" as a legal term for foreign nationals appeared in the Naturalization Act of 1790 and the Alien and Sedition Acts of 1798. Adding "illegal" simply makes it a descriptor for a foreign national who is in the country illegally. "Illegal alien" can be found as far back as 1924, the same year the United States Border Patrol was established. The Supreme Court used the term in the 1976 case United States v. Martinez-Fuerte. Bill Clinton used the term in his 1995 State of the Union address. As the term "alien" is still used in federal statutes and regulations, the term "illegal alien" is still appropriate when referring to people who have entered and/or are in the United States illegally.
Bottom line: The United States of America is a country with laws and a border. It is illegal to cross the border in any way that the United States does not define as lawful. If it is not lawful, it is a crime. Anyone who has come to the United States of America in a way that does not follow US law has committed a crime. That’s how crime works. I don’t know why I have to explain that.
Laravel Debugbar v4.0.0 marks a major release with package ownership transferring from barryvdh/laravel-debugbar to fruitcake/laravel-debugbar. This version brings php-debugbar 3.x support and includes several new collectors and improvements for modern Laravel applications.
HTTP Client collector for tracking outbound API requests
Inertia collector for Inertia.js data tracking
Improved Livewire support for versions 2, 3, and 4
jQuery removed in favor of modern JS
Improved performance and delayed rendering
Laravel Octane compatibility for long-running processes
And more
What’s New
HTTP Client Collector
This release adds a new collector that tracks HTTP client requests made through Laravel’s HTTP client. The collector provides visibility into outbound API calls, making it easier to debug external service integrations and monitor response times.
Inertia Collector
For applications using Inertia.js, the new Inertia collector tracks shared data and props passed to Inertia components. This helps debug data flow in Inertia-powered applications.
Enhanced Livewire Support
The debugbar now includes improved component detection for Livewire versions 2, 3, and 4. This provides better visibility into Livewire component lifecycle events and data updates across all currently supported Livewire versions.
Laravel Octane Compatibility
This version includes better handling for Laravel Octane and other long-running server processes. The debugbar now properly manages state across requests in persistent application environments.
Cache Usage Estimation
The cache widget now displays estimated byte usage, giving developers better insight into cache memory consumption during request processing.
Debugbar Position and Themes
This version has many UI improvements and settings like debugbar position, auto-hiding empty collectors, themes (Dark, Light, Auto), and more:
Breaking Changes
Package Ownership and Installation
The package has moved from barryvdh/laravel-debugbar to fruitcake/laravel-debugbar, requiring manual removal and reinstallation:
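The commands themselves aren’t reproduced above; as a sketch (verify the exact steps against the official upgrade guide), the swap would look like:

```shell
# Remove the old package, then require the new one under the fruitcake vendor.
composer remove barryvdh/laravel-debugbar
composer require fruitcake/laravel-debugbar --dev
```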
The namespace has changed from the original structure to Fruitcake\LaravelDebugbar. You’ll need to update any direct references to debugbar classes in your codebase.
Removed Features
Several features have been removed in this major version:
Socket storage support has been removed
Lumen framework support is no longer included
PDO extension functionality has been dropped
Configuration Changes
Default configuration values have been updated, and deprecated configuration options have been removed. Review your config/debugbar.php file and compare it with the published configuration from the new package.
Upgrade Notes
This is not a standard upgrade. You must manually remove the old package and install the new one using the commands shown above. After installation, update any namespace references in your code from the old barryvdh namespace to Fruitcake\LaravelDebugbar.
Review your configuration file for deprecated options and compare with the new defaults. The package maintains compatibility with Laravel 9.x through 12.x. See the upgrade docs for details on upgrading from 3.x to 4.x.
This major release takes LarAgent to the next level – focused on structured responses, reliable context management, richer tooling, and production-grade agent behavior.
Designed for both development teams and business applications where predictability, observability, and scalability matter
🛠️ Structured Outputs with DataModel
LarAgent introduces DataModel-based structured responses, moving beyond arrays to typed, predictable output shapes you can rely on in real apps.
What it means
Type-safe outputs — no more guessing keys or parsing unstructured text
Responses conform to a defined schema: you receive a DTO-like object as the response, and the same applies to tool arguments
Easier integration with UIs, APIs, and automated workflows
Full support for nesting, collections, nullables, union types, and everything you need to define structures of any complexity
Example
use LarAgent\Core\Abstractions\DataModel;
use LarAgent\Attributes\Desc;
class WeatherResponse extends DataModel
{
#[Desc('Temperature in Celsius')]
public float $temperature;
#[Desc('Condition (sunny/cloudy/etc.)')]
public string $condition;
}
class WeatherAgent extends Agent
{
protected $responseSchema = WeatherResponse::class;
}
$response = WeatherAgent::ask('Weather in Tbilisi?');
echo $response->temperature;
🗄️ Storage Abstraction Layer
v1.0 introduces a pluggable storage layer for chat history and context, enabling persistent, switchable, and scalable storage drivers.
What’s new
Eloquent & SimpleEloquent drivers included
Swap between memory, cache, or database without rewriting agents
Fallback mechanism with one primary and multiple secondary drivers
class MyAgent extends Agent
{
protected $history = [
CacheStorage::class, // Primary: read first, write first
FileStorage::class, // Fallback: used if primary fails on read
];
}
🔄 Intelligent Context Truncation
Long chats are inevitable, but hitting token limits shouldn’t be catastrophic. LarAgent now provides smart context management strategies.
Available strategies
Sliding Window: drop the oldest messages
Summarization: compress context using AI summaries
Symbolization: replace old messages with symbolic tags
👉 Save on token costs while preserving context most relevant to the current conversation.
🧠 Enhanced Session + Identity Management
Context now supports identity-based sessions, created from a user ID, chat name, agent name, and group. Identity storage holds all identity keys, making any agent’s context available for management via the Context facade. For example:
Usage tracking is based on session identity, which means you can check token usage by user, by agent, and/or by chat, allowing you to implement comprehensive statistics and reporting capabilities.
⚠️ Breaking Changes
v1.0 includes a few breaking API changes. Make sure to check the migration guide.
These days, companies like Stihl and Makita sell multi-heads. These are battery-powered motors that can drive a variety of common landscaping attachments, like string trimmers and hedge cutters.
Uniquely, Makita also offers this Snow Thrower attachment:
The business end is 12" wide and can handle a 6" depth of snow at a time. Tiltable vanes on the inside let you control whether you want to throw the snow to the left, to the right or straight ahead. The company says you can clear about five parking spaces with two 18V batteries.
So how well does it work? Seeing is believing. Here’s Murray Kruger of Kruger Construction putting it through its paces:
Audit logging has become a crucial component of database security and compliance, helping organizations track user activities, monitor data access patterns, and maintain detailed records for regulatory requirements and security investigations. Database audit logs provide a comprehensive trail of actions performed within the database, including queries executed, changes made to data, and user authentication attempts. Managing these logs is more straightforward with a robust storage solution such as Amazon Simple Storage Service (Amazon S3).
In this post, we explore two approaches for exporting MySQL audit logs to Amazon S3: either using batching with a native export to Amazon S3 or processing logs in real time with Amazon Data Firehose.
Solution overview
The first solution involves batch processing by using the built-in audit log export feature in Amazon RDS for MySQL or Aurora MySQL-Compatible to export logs to Amazon CloudWatch Logs. Amazon EventBridge periodically triggers an AWS Lambda function, which creates a CloudWatch export task that sends the last day’s audit logs to Amazon S3. The period (one day) is configurable based on your requirements. This solution is the most cost-effective and practical if you don’t require the audit logs to be available in near real time in an S3 bucket. The following diagram illustrates this workflow.
The other proposed solution uses Data Firehose to immediately process the MySQL audit logs within CloudWatch Logs and send them to an S3 bucket. This approach is suitable for business use cases that require immediate export of audit logs when they’re available within CloudWatch Logs. The following diagram illustrates this workflow.
Use cases
Once you’ve implemented either of these solutions, you’ll have your Aurora MySQL or RDS for MySQL audit logs stored securely in Amazon S3. This opens up a wealth of possibilities for analysis, monitoring, and compliance reporting. Here’s what you can do with your exported audit logs:
Run Amazon Athena queries: With your audit logs in S3, you can use Amazon Athena to run SQL queries directly against your log data. This allows you to quickly analyze user activities, identify unusual patterns, or generate compliance reports. For example, you could query for all actions performed by a specific user, or find all failed login attempts within a certain time frame.
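As a sketch, assuming a hypothetical external table audit_logs defined over the exported files (the table and column names below are illustrative; your schema depends on the audit log format), a failed-login query might look like:

```sql
-- Hypothetical table and columns; map them to your actual audit log format.
SELECT event_time, username, host, operation
FROM audit_logs
WHERE operation = 'FAILED_CONNECT'
ORDER BY event_time DESC
LIMIT 100;
```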
Create Amazon QuickSight dashboards: Using Amazon QuickSight in conjunction with Athena, you can create visual dashboards of your audit log data. This can help you spot trends over time, such as peak usage hours, most active users, or frequently accessed database objects.
Set up automated alerting: By combining your S3-stored logs with AWS Lambda and Amazon SNS, you can create automated alerts for specific events. For instance, you could set up a system to notify security personnel if there’s an unusual spike in failed login attempts or if sensitive tables are accessed outside of business hours.
Perform long-term analysis: With your audit logs centralized in S3, you can perform long-term trend analysis. This could help you understand how database usage patterns change over time, informing capacity planning and security policies.
Meet compliance requirements: Many regulatory frameworks require retention and analysis of database audit logs. With your logs in S3, you can easily demonstrate compliance with these requirements, running reports as needed for auditors.
By leveraging these capabilities, you can turn your audit logs from a passive security measure into an active tool for database management, security enhancement, and business intelligence.
Comparing solutions
The first solution used EventBridge to periodically trigger a Lambda function. This function creates a CloudWatch Log export task that sends a batch of log data to Amazon S3 at regular intervals. This method is well-suited for scenarios where you prefer to process logs in batches to optimize costs and resources.
The second solution uses Data Firehose to create a real-time audit log processing pipeline. This approach streams logs directly from CloudWatch to an S3 bucket, providing near real-time access to your audit data. In this context, “real-time” means that log data is processed and delivered synchronously as it is generated, rather than being sent in a pre-defined interval. This solution is ideal for scenarios requiring immediate access to log data or for high-volume logging environments.
Whether you choose the near real-time streaming approach or the scheduled export method, you will be well-equipped to manage your Aurora MySQL and RDS for MySQL audit logs effectively.
Prerequisites for both solutions
Before getting started, complete the following prerequisites:
Create or have an existing RDS for MySQL instance or Aurora MySQL cluster.
Create an S3 bucket to store the MySQL audit logs using the following AWS CLI command:
aws s3api create-bucket --bucket <bucket_name>
After the command is complete, you will see an output similar to the following:
Note: Each solution has specific service components which are discussed in their respective sections.
Solution #1: Perform audit log batch processing with EventBridge and Lambda
In this solution, we create a Lambda function to export your audit log to Amazon S3 based on the schedule you set using EventBridge Scheduler. This solution offers a cost-efficient way to transfer audit log files within an S3 bucket in a scheduled manner.
Create IAM role for EventBridge Scheduler
The first step is to create an AWS Identity and Access Management (IAM) role responsible for allowing EventBridge Scheduler to invoke the Lambda function we will create later. Complete the following steps to create this role:
Connect to a terminal with the AWS CLI or CloudShell.
Create a file named TrustPolicyForEventBridgeScheduler.json using your preferred text editor:
nano TrustPolicyForEventBridgeScheduler.json
Insert the following trust policy into the JSON file:
Note: Make sure to amend SourceAccount before saving the file. The condition is used to prevent unauthorized access from other AWS accounts.
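The policy document isn’t shown here; a minimal sketch of such a trust policy (fill in your own account ID) would be:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "scheduler.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "aws:SourceAccount": "<SourceAccount>" }
      }
    }
  ]
}
```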
Create a file named PermissionsForEventBridgeScheduler.json using your preferred text editor:
nano PermissionsForEventBridgeScheduler.json
Insert the following permissions into the JSON file:
Note: Replace <LambdaFunctionName> with the name of the function you’ll create later.
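The permissions document isn’t shown here; a minimal sketch granting the scheduler permission to invoke the function would be:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:<region>:<account-id>:function:<LambdaFunctionName>"
    }
  ]
}
```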
Use the following AWS CLI command to create the IAM role for EventBridge Scheduler to invoke the Lambda function:
Create the IAM policy and attach it to the previously created IAM role:
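The exact commands aren’t reproduced here; as a sketch, with illustrative role and policy names, those two steps might look like:

```shell
# Create the role using the trust policy file from the earlier step.
aws iam create-role \
  --role-name EventBridgeSchedulerRole \
  --assume-role-policy-document file://TrustPolicyForEventBridgeScheduler.json

# Create the permissions policy and attach it to the role.
aws iam create-policy \
  --policy-name PermissionsForEventBridgeScheduler \
  --policy-document file://PermissionsForEventBridgeScheduler.json
aws iam attach-role-policy \
  --role-name EventBridgeSchedulerRole \
  --policy-arn arn:aws:iam::<account-id>:policy/PermissionsForEventBridgeScheduler
```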
In this section, we created an IAM role with appropriate trust and permissions policies that allow EventBridge Scheduler to securely invoke Lambda functions from your AWS account. Next, we’ll create another IAM role that defines the permissions that your Lambda function needs to execute its tasks.
Create IAM role for Lambda
The next step is to create an IAM role responsible for allowing Lambda to put records from CloudWatch into your S3 bucket. Complete the following steps to create this role:
Connect to a terminal with the AWS CLI or CloudShell.
Create and write to a JSON file for the IAM trust policy using your preferred text editor:
nano TrustPolicyForLambda.json
Insert the following trust policy into the JSON file:
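The policy document isn’t shown here; the standard Lambda trust policy is a reasonable sketch:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```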
Use the following AWS CLI command to create the IAM role for Lambda to insert records from CloudWatch to Amazon S3:
Create a file named PermissionsForLambda.json using your preferred text editor:
nano PermissionsForLambda.json
Insert the following permissions into the JSON file:
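The permissions document isn’t shown here; a sketch covering the export-task and S3 actions the function uses would be (bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateExportTask",
        "logs:DescribeExportTasks",
        "logs:DescribeLogGroups"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetBucketAcl", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::<bucket_name>",
        "arn:aws:s3:::<bucket_name>/*"
      ]
    }
  ]
}
```

Note that CloudWatch Logs export tasks also require a bucket policy on the destination bucket granting the CloudWatch Logs service principal access; check the CloudWatch Logs export documentation for the exact bucket policy.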
Create the IAM policy and attach it to the previously created IAM role:
Create ZIP file for the Python Lambda function
To create a file with the code the Lambda function will invoke, complete the following steps:
Create and write to a file named lambda_function.py using your preferred text editor:
nano lambda_function.py
Within the file, insert the following code:
import boto3
import os
import datetime
import logging
import time
from botocore.exceptions import ClientError, NoCredentialsError, BotoCoreError

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def check_active_export_tasks(client):
    """Check for any active export tasks"""
    try:
        response = client.describe_export_tasks()
        active_tasks = [
            task for task in response.get('exportTasks', [])
            if task.get('status', {}).get('code') in ['RUNNING', 'PENDING']
        ]
        return active_tasks
    except ClientError as e:
        logger.error(f"Error checking active export tasks: {e}")
        return []

def wait_for_export_task_completion(client, max_wait_minutes=15, check_interval=60):
    """Wait for any active export tasks to complete"""
    max_wait_seconds = max_wait_minutes * 60
    waited_seconds = 0
    while waited_seconds < max_wait_seconds:
        active_tasks = check_active_export_tasks(client)
        if not active_tasks:
            logger.info("No active export tasks found, proceeding...")
            return True
        logger.info(f"Found {len(active_tasks)} active export task(s). Waiting {check_interval} seconds...")
        for task in active_tasks:
            task_id = task.get('taskId', 'Unknown')
            status = task.get('status', {}).get('code', 'Unknown')
            logger.info(f"Active task ID: {task_id}, Status: {status}")
        time.sleep(check_interval)
        waited_seconds += check_interval
    logger.warning(f"Timed out waiting for export tasks to complete after {max_wait_minutes} minutes")
    return False

def lambda_handler(event, context):
    try:
        required_env_vars = ['GROUP_NAME', 'DESTINATION_BUCKET', 'PREFIX', 'NDAYS']
        missing_vars = [var for var in required_env_vars if not os.environ.get(var)]
        if missing_vars:
            error_msg = f"Missing required environment variables: {', '.join(missing_vars)}"
            logger.error(error_msg)
            return {'statusCode': 400, 'body': {'error': error_msg}}

        GROUP_NAME = os.environ['GROUP_NAME'].strip()
        DESTINATION_BUCKET = os.environ['DESTINATION_BUCKET'].strip()
        PREFIX = os.environ['PREFIX'].strip()
        NDAYS = os.environ['NDAYS'].strip()
        MAX_WAIT_MINUTES = int(os.environ.get('MAX_WAIT_MINUTES', '30'))
        CHECK_INTERVAL = int(os.environ.get('CHECK_INTERVAL', '60'))
        RETRY_ON_CONCURRENT = os.environ.get('RETRY_ON_CONCURRENT', 'true').lower() == 'true'

        if not all([GROUP_NAME, DESTINATION_BUCKET, PREFIX, NDAYS]):
            error_msg = "Environment variables cannot be empty"
            logger.error(error_msg)
            return {'statusCode': 400, 'body': {'error': error_msg}}

        try:
            nDays = int(NDAYS)
            if nDays <= 0:
                raise ValueError("NDAYS must be a positive integer")
        except ValueError as e:
            error_msg = f"Invalid NDAYS value '{NDAYS}': {str(e)}"
            logger.error(error_msg)
            return {'statusCode': 400, 'body': {'error': error_msg}}

        try:
            currentTime = datetime.datetime.now()
            StartDate = currentTime - datetime.timedelta(days=nDays)
            EndDate = currentTime - datetime.timedelta(days=nDays - 1)
            fromDate = int(StartDate.timestamp() * 1000)
            toDate = int(EndDate.timestamp() * 1000)
            if fromDate >= toDate:
                raise ValueError("Invalid date range: fromDate must be less than toDate")
        except (ValueError, OverflowError) as e:
            error_msg = f"Date calculation error: {str(e)}"
            logger.error(error_msg)
            return {'statusCode': 400, 'body': {'error': error_msg}}

        try:
            BUCKET_PREFIX = os.path.join(PREFIX, StartDate.strftime('%Y{0}%m{0}%d').format(os.path.sep))
        except Exception as e:
            error_msg = f"Error creating bucket prefix: {str(e)}"
            logger.error(error_msg)
            return {'statusCode': 500, 'body': {'error': error_msg}}

        logger.info(f"Starting export task for log group: {GROUP_NAME}")
        logger.info(f"Date range: {StartDate.strftime('%Y-%m-%d')} to {EndDate.strftime('%Y-%m-%d')}")
        logger.info(f"Destination: s3://{DESTINATION_BUCKET}/{BUCKET_PREFIX}")

        try:
            client = boto3.client('logs')
        except NoCredentialsError:
            error_msg = "AWS credentials not found"
            logger.error(error_msg)
            return {'statusCode': 500, 'body': {'error': error_msg}}
        except Exception as e:
            error_msg = f"Error creating boto3 client: {str(e)}"
            logger.error(error_msg)
            return {'statusCode': 500, 'body': {'error': error_msg}}

        if RETRY_ON_CONCURRENT:
            logger.info("Checking for active export tasks...")
            active_tasks = check_active_export_tasks(client)
            if active_tasks:
                logger.info(f"Found {len(active_tasks)} active export task(s). Waiting for completion...")
                if not wait_for_export_task_completion(client, MAX_WAIT_MINUTES, CHECK_INTERVAL):
                    return {'statusCode': 409, 'body': {
                        'error': f'Active export task(s) still running after {MAX_WAIT_MINUTES} minutes',
                        'activeTaskCount': len(active_tasks)}}

        try:
            response = client.create_export_task(
                logGroupName=GROUP_NAME,
                fromTime=fromDate,
                to=toDate,
                destination=DESTINATION_BUCKET,
                destinationPrefix=BUCKET_PREFIX
            )
            task_id = response.get('taskId', 'Unknown')
            logger.info(f"Export task created successfully with ID: {task_id}")
            return {'statusCode': 200, 'body': {
                'message': 'Export task created successfully',
                'taskId': task_id,
                'logGroup': GROUP_NAME,
                'fromDate': StartDate.isoformat(),
                'toDate': EndDate.isoformat(),
                'destination': f"s3://{DESTINATION_BUCKET}/{BUCKET_PREFIX}"}}
        except ClientError as e:
            error_code = e.response['Error']['Code']
            error_msg = e.response['Error']['Message']
            if error_code == 'ResourceNotFoundException':
                logger.error(f"Log group '{GROUP_NAME}' not found")
                return {'statusCode': 404, 'body': {'error': f"Log group '{GROUP_NAME}' not found"}}
            elif error_code == 'LimitExceededException':
                logger.error(f"Export task limit exceeded (concurrent task running): {error_msg}")
                active_tasks = check_active_export_tasks(client)
                return {'statusCode': 409, 'body': {
                    'error': 'Cannot create export task: Another export task is already running',
                    'details': error_msg,
                    'activeTaskCount': len(active_tasks),
                    'suggestion': 'Only one export task can run at a time. Please wait for the current task to complete or set RETRY_ON_CONCURRENT=true to auto-retry.'}}
            elif error_code == 'InvalidParameterException':
                logger.error(f"Invalid parameter: {error_msg}")
                return {'statusCode': 400, 'body': {'error': f"Invalid parameter: {error_msg}"}}
            elif error_code == 'AccessDeniedException':
                logger.error(f"Access denied: {error_msg}")
                return {'statusCode': 403, 'body': {'error': f"Access denied: {error_msg}"}}
            else:
                logger.error(f"AWS ClientError ({error_code}): {error_msg}")
                return {'statusCode': 500, 'body': {'error': f"AWS error: {error_msg}"}}
        except BotoCoreError as e:
            error_msg = f"BotoCore error: {str(e)}"
            logger.error(error_msg)
            return {'statusCode': 500, 'body': {'error': error_msg}}
        except Exception as e:
            error_msg = f"Unexpected error creating export task: {str(e)}"
            logger.error(error_msg)
            return {'statusCode': 500, 'body': {'error': error_msg}}
    except Exception as e:
        error_msg = f"Unexpected error in lambda_handler: {str(e)}"
        logger.error(error_msg, exc_info=True)
        return {'statusCode': 500, 'body': {'error': 'Internal server error'}}
Zip the file using the following command:
zip function.zip lambda_function.py
Create Lambda function
Complete the following steps to create a Lambda function:
Connect to a terminal with the AWS CLI or CloudShell.
Run the following command, which references the zip file previously created:
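The command isn’t reproduced here; a sketch with illustrative names (adjust the function name, role ARN, log group, and bucket to your environment) might be:

```shell
aws lambda create-function \
  --function-name ExportAuditLogsToS3 \
  --runtime python3.12 \
  --handler lambda_function.lambda_handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::<account-id>:role/<LambdaRoleName> \
  --timeout 900 \
  --environment "Variables={GROUP_NAME=<log_group_name>,DESTINATION_BUCKET=<bucket_name>,PREFIX=audit-logs,NDAYS=1}"
```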
The NDAYS variable in the preceding command will determine the dates of audit logs exported per invocation of the Lambda function. For example, if you plan on exporting logs one time per day to Amazon S3, set NDAYS=1, as shown in the preceding command.
Add concurrency limits to keep executions in control:
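As a sketch, assuming the illustrative function name used above:

```shell
aws lambda put-function-concurrency \
  --function-name ExportAuditLogsToS3 \
  --reserved-concurrent-executions 2
```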
Note: Reserved concurrency in Lambda sets a fixed limit on how many instances of your function can run simultaneously, like having a specific number of workers for a task. In this database export scenario, we’re limiting it to 2 concurrent executions to prevent overwhelming the database, avoid API throttling, and ensure smooth, controlled exports. This limitation helps maintain system stability, prevents resource contention, and keeps costs in check.
In this section, we created a Lambda function that will handle the CloudWatch log exports, configured its essential parameters including environment variables, and set a concurrency limit to ensure controlled execution. Next, we’ll create an EventBridge schedule that will automatically trigger this Lambda function at specified intervals to perform the log exports.
Create EventBridge schedule
Complete the following steps to create an EventBridge schedule to invoke the Lambda function at an interval of your choosing:
Connect to a terminal with the AWS CLI or CloudShell.
Run the following command:
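The command isn’t reproduced here; a sketch with illustrative names (a daily schedule targeting the Lambda function, invoked via the scheduler role created earlier) could be:

```shell
aws scheduler create-schedule \
  --name ExportAuditLogsDaily \
  --schedule-expression "rate(1 day)" \
  --flexible-time-window Mode=OFF \
  --target "Arn=arn:aws:lambda:<region>:<account-id>:function:ExportAuditLogsToS3,RoleArn=arn:aws:iam::<account-id>:role/EventBridgeSchedulerRole"
```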
The schedule-expression parameter in the preceding command must match the NDAYS environment variable in the previously created Lambda function (for example, rate(1 day) with NDAYS=1).
This solution provides an efficient, scheduled approach to exporting RDS audit logs to Amazon S3 using AWS Lambda and EventBridge Scheduler. By leveraging these serverless components, we’ve created a cost-effective, automated system that periodically transfers audit logs to S3 for long-term storage and analysis. This method is particularly useful for organizations that need regular, batch-style exports of their database audit logs, allowing for easier compliance reporting and historical data analysis.
While the first solution offers a scheduled, batch-processing approach, some scenarios require a more real-time solution for audit log processing. In our next solution, we’ll explore how to create a near real-time audit log processing system using Amazon Data Firehose. This approach will allow for continuous streaming of audit logs from RDS to S3, providing almost immediate access to log data.
Solution 2: Create near real-time audit log processing with Amazon Data Firehose
In this section, we review how to create a near real-time audit log export to Amazon S3 using the power of Data Firehose. With this solution, you can directly load the latest audit log files to an S3 bucket for quick analysis, manipulation, or other purposes.
Create IAM role for CloudWatch Logs
The first step is to create an IAM role responsible for allowing CloudWatch Logs to put records into the Firehose delivery stream (CWLtoDataFirehoseRole). Complete the following steps to create this role:
Connect to a terminal with the AWS CLI or CloudShell.
Create and write to a JSON file for the IAM trust policy using your preferred text editor:
nano TrustPolicyForCWL.json
Insert the following trust policy into the JSON file:
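The policy document isn’t shown here; a minimal sketch of the trust policy allowing CloudWatch Logs to assume the role would be:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "logs.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```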
Create and write to a new JSON file for the IAM permissions policy using your preferred text editor:
nano PermissionsForCWL.json
Insert the following permissions into the JSON file:
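The permissions document isn’t shown here; a sketch granting CloudWatch Logs the ability to write into the delivery stream (stream name is a placeholder) would be:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
      "Resource": "arn:aws:firehose:<region>:<account-id>:deliverystream/<stream_name>"
    }
  ]
}
```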
Use the following AWS CLI command to create the IAM role for CloudWatch Logs to insert records into the Firehose delivery stream:
Create the IAM policy and attach it to the previously created IAM role:
Create IAM role for Firehose delivery stream
The next step is to create an IAM role (DataFirehosetoS3Role) responsible for allowing the Firehose delivery stream to insert the audit logs into an S3 bucket. Complete the following steps to create this role:
Connect to a terminal with the AWS CLI or CloudShell.
Create and write to a JSON file for the IAM trust policy using your preferred text editor:
nano TrustPolicyForFirehose.json
Insert the following trust policy into the JSON file:
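The policy document isn’t shown here; a minimal sketch of the trust policy allowing Data Firehose to assume the role would be:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "firehose.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```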
Create and write to a new JSON file for the IAM permissions using your preferred text editor:
nano PermissionsForFirehose.json
Insert the following permissions into the JSON file:
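The permissions document isn’t shown here; a sketch of the standard S3 destination permissions for a Firehose delivery stream (bucket name is a placeholder) would be:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket_name>",
        "arn:aws:s3:::<bucket_name>/*"
      ]
    }
  ]
}
```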
Use the following AWS CLI command to create the IAM role for Data Firehose to perform operations on the S3 bucket:
Create the IAM policy and attach it to the previously created IAM role:
Create the Firehose delivery stream
Now you create the Firehose delivery stream to allow near real-time transfer of MySQL audit logs from CloudWatch Logs to your S3 bucket. Complete the following steps:
Create the Firehose delivery stream with the following AWS CLI command. Setting the buffer interval and size determines how long your data is buffered before being delivered to the S3 bucket. For more information, refer to AWS documentation. In this example, we use the default values:
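The command isn’t reproduced here; a sketch with an illustrative stream name and the default buffering values (5 MB or 300 seconds, whichever comes first) might be:

```shell
aws firehose create-delivery-stream \
  --delivery-stream-name rds-audit-logs \
  --delivery-stream-type DirectPut \
  --extended-s3-destination-configuration \
    "RoleARN=arn:aws:iam::<account-id>:role/DataFirehosetoS3Role,BucketARN=arn:aws:s3:::<bucket_name>,BufferingHints={SizeInMBs=5,IntervalInSeconds=300}"
```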
Wait until the Firehose delivery stream becomes active (this might take a few minutes). You can use the Firehose CLI describe-delivery-stream command to check the status of the delivery stream. Note the DeliveryStreamDescription.DeliveryStreamARN value, to use in a later step:
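A sketch of that status check, assuming the illustrative stream name used above:

```shell
aws firehose describe-delivery-stream --delivery-stream-name rds-audit-logs
```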
After the Firehose delivery stream is in an active state, create a CloudWatch Logs subscription filter. This subscription filter immediately starts the flow of near real-time log data from the chosen log group to your Firehose delivery stream. Make sure to provide the log group name that you want to push to Amazon S3 and properly copy the destination-arn of your Firehose delivery stream:
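The command isn’t reproduced here; a sketch with placeholder names (an empty filter pattern forwards every log event) might be:

```shell
aws logs put-subscription-filter \
  --log-group-name "/aws/rds/instance/<db_instance>/audit" \
  --filter-name "AuditLogsToFirehose" \
  --filter-pattern "" \
  --destination-arn "arn:aws:firehose:<region>:<account-id>:deliverystream/rds-audit-logs" \
  --role-arn "arn:aws:iam::<account-id>:role/CWLtoDataFirehoseRole"
```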
Your near real-time MySQL audit log solution is now properly configured and will begin delivering MySQL audit logs to your S3 bucket through the Firehose delivery stream.
Clean up
To clean up your resources, complete the following steps (depending on which solution you used):
In this post, we’ve presented two solutions for managing Aurora MySQL or RDS for MySQL audit logs, each offering unique benefits for different business use cases.
We encourage you to implement these solutions in your own environment and share your experiences, challenges, and success stories in the comments section. Your feedback and real-world implementations can help fellow AWS users choose and adapt these solutions to best fit their specific audit logging needs.