How to use Apple’s new Journal app with the iOS 17.2 update

https://s.yimg.com/os/creatr-uploaded-images/2023-12/7fc6a590-9873-11ee-bb6b-c5c58a96588d

Apple’s AI-powered Journal app is finally here. The new diary-writing tool was first teased for iOS 17 back in June, but it only became available on Monday with the new iPhone update, nearly three months after iOS 17 itself came out. Now that Apple has released iOS 17.2, iPhone users can access the Journal app, which lets them jot down their thoughts in a digital diary. Journaling is a practice that can improve mental wellbeing, and it can also be used to fuel creative projects.

You can create traditional text entries, add voice recordings to your notes, or include recent videos or pictures. If you need inspiration, AI-derived text prompts can offer suggestions for what to write or create an entry for next. The app also predicts and proposes times for you to create a new entry based on your recent iPhone activity, which can include newer photos and videos, location history, recently listened-to playlists, and workout habits. This guide will walk you through how to get started with the Journal app and personalize your experience.

How to create a new entry in the Journal app on iPhone

Apple Journal App screengrab
Malak Saleh

When you open the Journal app, tap the + button at the bottom of the page to create a new entry. If you want to start with a blank slate, tap ‘New Entry’ and an empty page will appear where you can start typing. You can add recent photos from your library by tapping the photos icon below the text space, take a photo in the moment and add it to your entry, or include a recorded voice memo by tapping the voice icon. You can also add a location to your entry by tapping the arrow icon at the bottom right of the entry page, which might be helpful for travel bloggers looking back at their trips abroad. You can edit the date of an entry at the top of the page.

Alternatively, you can create a post based on recent or recommended activities that your phone has compiled, such as pictures, locations from events you attended, or contacts you recently interacted with. The Recent tab shows you, in chronological order, people, photos and addresses that can inspire entries based on recent activities. The Recommended tab pulls from highlighted images automatically selected from your photo memories. For example, a selection of portraits from 2022 can appear as a recommendation to inspire your next written entry. Some suggestions under the Recommended tab may also appear with writing prompts, for example a block of text with a question like, “What was the highlight of your trip?”

Apple Journal App
Malak Saleh

Scheduling, bookmarking and filtering

If you’re not free to write when a suggestion is made, you can also save specific moments you want to journal about and write at a later time. Using the journaling schedule feature, you can set a specific time to be notified to create an entry, which can help you make journaling a consistent practice. Go to the Settings app on your iPhone and search for the Journal app. Turn on the ‘Journaling schedule’ feature and personalize the days and times you would like to be reminded to write entries. As a side note, in Settings, you can also opt to lock your journal using your device passcode or Face ID.

Settings to schedule journal sessions
Malak Saleh

You can also organize your entries within the app using the bookmarking feature, so you can filter and find them at your own convenience. After creating an entry, tap the three dots at the bottom of your page and scroll down to tap the bookmark tab. This is the same place where you can delete or edit a journal entry.

Later on, if you want to revisit a bookmarked entry, tap the three-line icon at the corner of the main journal page to select the filter you would like applied to your entries. You can choose to view only bookmarked entries, entries with photos, entries with recorded audio, or entries with places or locations. This might be helpful once your journal starts to fill up with recordings.

Adding music, workouts and other off-platform entries into your journal app

Using your streaming app of choice (Apple Music, Spotify or Amazon Music), you can integrate specific tracks or podcast episodes into your entries by tapping the three-dot button at the bottom of your screen, which opens up the option to ‘share your music.’ The option to share a track to the Journal app should appear, and the track will sit at the top of a blank entry when you open the app.

You can use the same method with other applications, like Apple’s Fitness app. You can share and export a logged workout into your journal and start writing about that experience.

Amazon Music
Malak Saleh

This article originally appeared on Engadget at https://www.engadget.com/how-to-use-apples-new-journal-app-with-the-ios-172-update-164518403.html?src=rss

Engadget

PHP FPM status card for Laravel Pulse

https://opengraph.githubassets.com/2bff59fe8040a87ba2fa7f87c30df23ca5392d7bf7f0411375a4cc4a86b43e15/maantje/pulse-php-fpm


Get real-time insights into the status of your PHP FPM with this convenient card for Laravel Pulse.

Example

(Example screenshot of the PHP FPM status card on a Pulse dashboard)

Installation

Install the package using Composer:

composer require maantje/pulse-php-fpm

Enable PHP FPM status path

Configure your PHP FPM status path in your FPM configuration:
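
The README's exact snippet for this step isn't reproduced above; enabling the status endpoint is done with the pm.status_path directive in your FPM pool configuration (the file location varies by distribution), for example:

pm.status_path = /status

Point the recorder's 'status_path' option (shown below) at wherever FPM serves this path.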

Register the recorder

In your pulse.php configuration file, register the PhpFpmRecorder with the desired settings:

return [
    // ...

    'recorders' => [
        PhpFpmRecorder::class => [
            // Optionally set a server name; gethostname() is the default
            'server_name' => env('PULSE_SERVER_NAME', gethostname()),
            // Optionally set a status path; the current value is the default
            'status_path' => 'localhost:9000/status', // with a unix socket: unix:/var/run/php-fpm/web.sock/status
            // Optionally configure datasets; these are the default values.
            // Omitting a dataset or setting its value to false will remove the line from the chart.
            // You can also set a color as the value, which will be used in the chart.
            'datasets' => [
                'active processes' => '#9333ea',
                'total processes' => 'rgba(147,51,234,0.5)',
                'idle processes' => '#eab308',
                'listen queue' => '#e11d48',
            ],
        ],
    ],
];

Ensure you’re running the pulse:check command.
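
If it isn't already running (for example, under a process supervisor), start it with:

php artisan pulse:check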

Add to your dashboard

Integrate the card into your Pulse dashboard by publishing the vendor view and then modifying the dashboard.blade.php file.
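
If you haven't published the Pulse dashboard view yet, the standard command (it also appears in the custom Pulse card write-up later in this digest) is:

php artisan vendor:publish --tag=pulse-dashboard

Then add the card to resources/views/vendor/pulse/dashboard.blade.php: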

<x-pulse>
    <livewire:pulse.servers cols="full" />
    
+ <livewire:fpm cols="full" />

    <livewire:pulse.usage cols="4" rows="2" />

    <livewire:pulse.queues cols="4" />

    <livewire:pulse.cache cols="4" />

    <livewire:pulse.slow-queries cols="8" />

    <livewire:pulse.exceptions cols="6" />

    <livewire:pulse.slow-requests cols="6" />

    <livewire:pulse.slow-jobs cols="6" />

    <livewire:pulse.slow-outgoing-requests cols="6" />

</x-pulse>

And that’s it! Enjoy enhanced visibility into your PHP FPM status on your Pulse dashboard.

Laravel News Links

How to Encrypt and Decrypt Model Data Using Casts in Laravel

https://laracoding.com/wp-content/uploads/2023/12/laravel-eloquent-encryption-cast-type.png

Using the Eloquent ‘encrypted’ cast type, you can instruct Laravel to encrypt specific attributes before storing them in the database. Later, when accessed through Eloquent, the data is automatically decrypted for your application to use.

Encrypting fields in a database enhances security by scrambling sensitive data. This measure shields information like emails, addresses, and phone numbers, preventing unauthorized access and maintaining confidentiality even if data is exposed.

In this guide, you’ll learn to use Eloquent’s built-in ‘encrypted’ cast to encrypt sensitive data within an ‘Employee’ model, ensuring personal data is stored securely.

Important note: Encryption and decryption in Laravel are tied to the APP_KEY found in the .env file. This key is generated during the installation and should remain unchanged. Avoid running ‘php artisan key:generate‘ on your production server. Generating a new APP_KEY will render any encrypted data irretrievable.

With that in mind, let’s get started and apply encryption!

Step 1: Create a Laravel Project

Begin by creating a new Laravel project if you haven’t done so already. Open your terminal and run:

composer create-project laravel/laravel model-encrypt
cd model-encrypt

Step 2: Add Database Credentials to the .env file

Open the .env file in your project and add the database credentials you wish to use:

.env

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=your-db
DB_USERNAME=your-db-user
DB_PASSWORD=your-db-password

Step 3: Create a Model and Migration

Begin by generating an Employee model and its corresponding migration using Artisan commands:

php artisan make:model Employee -m

Step 4: Add Migration Code

Open the generated migration file and add the code below to define the table and its columns, including those that we will apply encryption to later on.

As stated in the documentation on Eloquent encrypted casting, all columns that will be encrypted need to be of type ‘text’ or larger, so make sure you use the correct type in your migration!

database/migrations/2023_12_10_172859_create_employees_table.php

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::create('employees', function (Blueprint $table) {
            $table->id();
            $table->string('name'); // The 'name' column, which we won't encrypt
            $table->text('email'); // The 'email' column, which we will encrypt
            $table->text('phone'); // The 'phone' column, which we will encrypt
            $table->text('address'); // The 'address' column, which we will encrypt
            // Other columns...
            $table->timestamps();
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('employees');
    }
};

Step 5: Run the Migration

Run the migration to create the ‘employees’ table:
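
php artisan migrate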

Step 6: Add encrypted casts to Model

Open the Employee model and add the code below to specify the attributes to be encrypted using the $casts property. We’ll also define a $fillable array so the fields support mass assignment, which makes it easier to create models with data:

app/Models/Employee.php

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Employee extends Model
{
    protected $casts = [
        'email' => 'encrypted',
        'phone' => 'encrypted',
        'address' => 'encrypted',
        // Other sensitive attributes...
    ];

    protected $fillable = [
        'name',
        'email',
        'phone',
        'address',
    ];

    // Other model configurations...
}

Step 7: Saving and Retrieving Encrypted Data

Once configured, saving and retrieving data from the encrypted attributes remains unchanged. Eloquent will automatically handle the encryption and decryption processes.

To test this out, I like to use Laravel Tinker. To follow along, open Tinker by running:
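
php artisan tinker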

Then paste the following code:

use App\Models\Employee;

$employee = Employee::create([
 'name' => 'Paul Atreides',
 'email' => 'paul@arrakis.com', // Data is encrypted before storing
 'phone' => '123-456-7890', // Encrypted before storing
 'address' => 'The Keep 12', // Encrypted before storing
]);

echo $employee->email; // Automatically decrypted
echo $employee->phone; // Automatically decrypted
echo $employee->address; // Automatically decrypted

The output shows that Laravel Eloquent was able to decrypt the contents properly:

> echo $employee->email;
paul@arrakis.com
> echo $employee->phone;
123-456-7890
> echo $employee->address;
The Keep 12

If we view the contents in our database we can verify that the sensitive data in email, phone and address is in fact encrypted:

Screenshot of HeidiSQL Showing Encrypted Data in employees Table

Conclusion

By using Laravel Eloquent’s built-in cast “encrypted” we can easily add a layer of security that applies encryption to sensitive data.

In our example, we learned how to encrypt sensitive employee data, such as email, address, and phone, and demonstrated how the application can still use it.

Now you can apply this technique to your own applications and ensure the privacy of your users is up to today’s standards. Happy coding!


Laravel News Links

How to Use Percona Toolkit’s pt-table-sync for Replica Tables With Triggers in MySQL

https://www.percona.com/blog/wp-content/uploads/2023/12/pt-table-sync-for-Replica-Tables-With-Triggers-200×119.jpg
pt-table-sync for Replica Tables With Triggers

In Percona Managed Services, we manage Percona Server for MySQL, Community MySQL, and MariaDB. Sometimes, the replica server might have replication errors, and the replica might be out of sync with the primary. In this case, we can use Percona Toolkit’s pt-table-checksum and pt-table-sync to check the data drift between primary and replica servers and bring the replica back in sync with the primary. This blog gives you some ideas on using pt-table-sync for replica tables with triggers.

In my lab, we have two test nodes with replication setup, and both servers will have Debian 11 and Percona Server for MySQL 8.0.33 (with Percona Toolkit) installed.

The PRIMARY server is deb11m8 (IP: 192.168.56.188 ), and the REPLICA server name is deb11m8s (IP: 192.168.56.189).

1. Creating the test tables and the AFTER INSERT trigger

Create the tables and trigger below on the PRIMARY; they will replicate down to the REPLICA. We have two tables: test_tab and test_tab_log. When a new row is inserted into test_tab, the trigger will fire and record the data and the user who did the insert into the test_tab_log table.

Create database testdb;
Use testdb; 
Create table test_tab (id bigint NOT NULL , test_data varchar(50)  NOT NULL ,op_time TIMESTAMP  NOT NULL , PRIMARY KEY (id,op_time));              
Create table test_tab_log (id bigint  NOT NULL , test_data varchar(50)  NOT NULL ,op_user varchar(60)  NOT NULL  ,op_time TIMESTAMP  NOT NULL , PRIMARY KEY (id,op_time)); 
DELIMITER $$
CREATE DEFINER=`larry`@`%` TRIGGER after_test_tab_insert  AFTER INSERT
ON test_tab FOR EACH ROW
BEGIN
   INSERT INTO test_tab_log(id,test_data,op_user,op_time) VALUES(new.id, NEW.test_data, USER(),NOW());
END$$
DELIMITER ;

2. Let’s fill in some test data

We do an insert as a root user. You can see that after data is inserted, the trigger fires as expected.

mysql> insert into test_tab (id,test_data,op_time) values(1,'lt1',now());
Query OK, 1 row affected (0.01 sec)
mysql> select * from test_tab; select * from test_tab_log;
+----+-----------+---------------------+
| id | test_data | op_time             |
+----+-----------+---------------------+
|  1 | lt1       | 2023-11-26 09:59:19 |
+----+-----------+---------------------+
1 row in set (0.00 sec)
+----+-----------+----------------+---------------------+
| id | test_data | op_user        | op_time             |
+----+-----------+----------------+---------------------+
|  1 | lt1       | root@localhost | 2023-11-26 09:59:19 |
+----+-----------+----------------+---------------------+
1 row in set (0.00 sec)
We insert another row:
mysql> insert into test_tab (id,test_data,op_time) values(2,'lt2',now());
mysql> select * from test_tab; select * from test_tab_log;
+----+-----------+---------------------+
| id | test_data | op_time             |
+----+-----------+---------------------+
|  1 | lt1       | 2023-11-26 09:59:19 |
|  2 | lt2       | 2023-11-26 10:01:30 |
+----+-----------+---------------------+
2 rows in set (0.00 sec)
+----+-----------+----------------+---------------------+
| id | test_data | op_user        | op_time             |
+----+-----------+----------------+---------------------+
|  1 | lt1       | root@localhost | 2023-11-26 09:59:19 |
|  2 | lt2       | root@localhost | 2023-11-26 10:01:30 |
+----+-----------+----------------+---------------------+
2 rows in set (0.00 sec)

3. Let’s get percona.dsns ready for pt-table-checksum and pt-table-sync

CREATE TABLE percona.dsns (`id` int(11) NOT NULL AUTO_INCREMENT,`parent_id` int(11) DEFAULT NULL, `dsn` varchar(255) NOT NULL, PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO percona.dsns (dsn) VALUES ('h=192.168.56.190');

4. Simulate an out-of-sync replica on 192.168.56.190 by removing one row (id=1) from test_tab

mysql> use testdb;
Database changed
mysql> set sql_log_bin=0;
Query OK, 0 rows affected (0.00 sec)
mysql> delete from test_tab where id=1;
Query OK, 1 row affected (0.00 sec)
mysql> select * from test_tab; select * from test_tab_log;
+----+-----------+---------------------+
| id | test_data | op_time             |
+----+-----------+---------------------+
|  2 | lt2       | 2023-11-26 10:01:30 |
+----+-----------+---------------------+
1 row in set (0.00 sec)
+----+-----------+----------------+---------------------+
| id | test_data | op_user        | op_time             |
+----+-----------+----------------+---------------------+
|  1 | lt1       | root@localhost | 2023-11-26 09:59:19 |
|  2 | lt2       | root@localhost | 2023-11-26 10:01:30 |
+----+-----------+----------------+---------------------+
2 rows in set (0.00 sec)

5. Run pt-table-checksum to report the difference

root@deb11m8:~/test_pt_trigger# pt-table-checksum h=192.168.56.189 --port=3306 --no-check-binlog-format \
--no-check-replication-filters --replicate percona.checksums_test_tab \
--recursion-method=dsn=D=percona,t=dsns \
--tables testdb.test_tab \
--max-load Threads_running=50 \
--max-lag=10 --pause-file /tmp/checksums_test_tab
Checking if all tables can be checksummed ...
Starting checksum ...
            TS ERRORS  DIFFS     ROWS  DIFF_ROWS  CHUNKS SKIPPED    TIME TABLE
11-26T10:02:58      0      1        2          1       1       0   4.148 testdb.test_tab
root@deb11m8:~/test_pt_trigger#

On the REPLICA, deb11m8s, we can see the checksum table reports the difference.

mysql> SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
    -> FROM percona.checksums_test_tab
    -> WHERE (
    -> master_cnt <> this_cnt
    -> OR master_crc <> this_crc
    -> OR ISNULL(master_crc) <> ISNULL(this_crc))
    -> GROUP BY db, tbl;
+--------+----------+------------+--------+
| db     | tbl      | total_rows | chunks |
+--------+----------+------------+--------+
| testdb | test_tab |          1 |      1 |
+--------+----------+------------+--------+
1 row in set (0.00 sec)

6. Let’s try pt-table-sync to fix it; we will run pt-table-sync under user ‘larry’@’%’

pt-table-sync reports that triggers are defined on the table and will not continue with the fix:

root@deb11m8:~/test_pt_trigger# pt-table-sync h=192.168.56.190,P=3306 --sync-to-master \
--replicate percona.checksums_test_tab \
--tables=testdb.test_tab \
--verbose --print
# Syncing via replication P=3306,h=192.168.56.190
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
Triggers are defined on the table at /usr/bin/pt-table-sync line 11306. while doing testdb.test_tab on 192.168.56.190
#      0       0      0      0 0         10:03:31 10:03:31 1    testdb.test_tab

pt-table-sync has a --[no]check-triggers option that will skip the trigger check. With --print, the result looks good:

root@deb11m8:~/test_pt_trigger# pt-table-sync --user=larry --ask-pass h=192.168.56.190,P=3306 --sync-to-master --nocheck-triggers \
--replicate percona.checksums_test_tab \
--tables=testdb.test_tab \
--verbose --print
# Syncing via replication P=3306,h=192.168.56.190
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
REPLACE INTO `testdb`.`test_tab`(`id`, `test_data`, `op_time`) VALUES ('1', 'lt1', '2023-11-26 09:59:19') /*percona-toolkit src_db:testdb src_tbl:test_tab src_dsn:P=3306,h=192.168.56.189 dst_db:testdb dst_tbl:test_tab dst_dsn:P=3306,h=192.168.56.190 lock:1 transaction:1 changing_src:percona.checksums_test_tab replicate:percona.checksums_test_tab bidirectional:0 pid:4169 user:root host:deb11m8*/;
#      0       1      0      0 Nibble    10:03:54 10:03:55 2    testdb.test_tab

When we run pt-table-sync with --execute under user ‘larry’@’%’:

root@deb11m8:~/test_pt_trigger# pt-table-sync --user=larry --ask-pass h=192.168.56.190,P=3306 --sync-to-master --nocheck-triggers \
--replicate percona.checksums_test_tab \
--tables=testdb.test_tab \
--verbose --execute
# Syncing via replication P=3306,h=192.168.56.190
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      0       1      0      0 Nibble    10:05:26 10:05:26 2    testdb.test_tab
-------PRIMARY -------
mysql> select * from test_tab; select * from test_tab_log;
+----+-----------+---------------------+
| id | test_data | op_time             |
+----+-----------+---------------------+
|  1 | lt1       | 2023-11-26 09:59:19 |
|  2 | lt2       | 2023-11-26 10:01:30 |
+----+-----------+---------------------+
2 rows in set (0.00 sec)
+----+-----------+----------------+---------------------+
| id | test_data | op_user        | op_time             |
+----+-----------+----------------+---------------------+
|  1 | lt1       | root@localhost | 2023-11-26 09:59:19 |
|  1 | lt1       | larry@deb11m8  | 2023-11-26 10:05:26 |
|  2 | lt2       | root@localhost | 2023-11-26 10:01:30 |
+----+-----------+----------------+---------------------+
3 rows in set (0.00 sec)
-----REPLICA
mysql> select * from test_tab; select * from test_tab_log;
+----+-----------+---------------------+
| id | test_data | op_time             |
+----+-----------+---------------------+
|  1 | lt1       | 2023-11-26 09:59:19 |
|  2 | lt2       | 2023-11-26 10:01:30 |
+----+-----------+---------------------+
2 rows in set (0.00 sec)
+----+-----------+----------------+---------------------+
| id | test_data | op_user        | op_time             |
+----+-----------+----------------+---------------------+
|  1 | lt1       | root@localhost | 2023-11-26 09:59:19 |
|  1 | lt1       |                | 2023-11-26 10:05:26 |
|  2 | lt2       | root@localhost | 2023-11-26 10:01:30 |
+----+-----------+----------------+---------------------+
3 rows in set (0.00 sec)

We can see a new row inserted into the test_tab_log table. The reason is that the trigger fired on the primary and replicated to the REPLICA when we ran pt-table-sync.

7. If we do not want that to happen (a new row inserted into the test_tab_log table)

Option 1: Run pt-table-checksum/pt-table-sync for the test_tab_log table again. This might fix the issue.

Option 2: We might need to do some work on the trigger, like below (or there may be a better way).

Let’s recreate the trigger as below; it will check whether the insert was performed by ‘larry’.

Drop trigger  after_test_tab_insert;
DELIMITER $$
CREATE  DEFINER=`larry`@`%`  TRIGGER after_test_tab_insert
AFTER INSERT
ON test_tab FOR EACH ROW
BEGIN
   IF left(USER(),5) <> 'larry' and trim(left(USER(),5)) <>'' THEN
     INSERT INTO test_tab_log(id,test_data, op_user,op_time)
     VALUES(new.id, NEW.test_data, USER(),NOW());
   END IF;
END$$
DELIMITER ;

And restore the data to its original out-of-sync state.
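
The exact reset statements aren't shown in the post; one way to get back to that state (a sketch, run manually on each server with session binary logging disabled so the cleanup itself doesn't replicate) could be:

-- On the PRIMARY: remove the log row the trigger wrote during the earlier sync
SET sql_log_bin = 0;
DELETE FROM testdb.test_tab_log WHERE op_user LIKE 'larry@%';
SET sql_log_bin = 1;

-- On the REPLICA: re-introduce the drift and drop its copy of that log row
SET sql_log_bin = 0;
DELETE FROM testdb.test_tab WHERE id = 1;
DELETE FROM testdb.test_tab_log WHERE op_user NOT LIKE 'root@%';
SET sql_log_bin = 1;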

The PRIMARY

mysql> select * from test_tab; select * from test_tab_log;
+----+-----------+---------------------+
| id | test_data | op_time             |
+----+-----------+---------------------+
|  1 | lt1       | 2023-11-26 09:59:19 |
|  2 | lt2       | 2023-11-26 10:01:30 |
+----+-----------+---------------------+
2 rows in set (0.00 sec)
+----+-----------+----------------+---------------------+
| id | test_data | op_user        | op_time             |
+----+-----------+----------------+---------------------+
|  1 | lt1       | root@localhost | 2023-11-26 09:59:19 |
|  2 | lt2       | root@localhost | 2023-11-26 10:01:30 |
+----+-----------+----------------+---------------------+

The REPLICA

mysql> select * from test_tab; select * from test_tab_log;
+----+-----------+---------------------+
| id | test_data | op_time             |
+----+-----------+---------------------+
|  2 | lt2       | 2023-11-26 10:01:30 |
+----+-----------+---------------------+
1 row in set (0.00 sec)
+----+-----------+----------------+---------------------+
| id | test_data | op_user        | op_time             |
+----+-----------+----------------+---------------------+
|  1 | lt1       | root@localhost | 2023-11-26 09:59:19 |
|  2 | lt2       | root@localhost | 2023-11-26 10:01:30 |
+----+-----------+----------------+---------------------+
2 rows in set (0.00 sec)

Run pt-table-sync under user ‘larry’@’%’.  

root@deb11m8s:~# pt-table-sync --user=larry --ask-pass h=192.168.56.190,P=3306 --sync-to-master --nocheck-triggers --replicate percona.checksums_test_tab --tables=testdb.test_tab --verbose --execute
Enter password for 192.168.56.190: 
# Syncing via replication P=3306,h=192.168.56.190,p=...,u=larry
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      0       1      0      0 Nibble    21:02:26 21:02:27 2    testdb.test_tab

We can use pt-table-sync, which will fix the data drift for us, and the trigger will not fire when pt-table-sync is run under user larry.


--- The PRIMARY ---
mysql> select * from test_tab; select * from test_tab_log;
+----+-----------+---------------------+
| id | test_data | op_time             |
+----+-----------+---------------------+
|  1 | lt1       | 2023-11-26 09:59:19 |
|  2 | lt2       | 2023-11-26 10:01:30 |
+----+-----------+---------------------+
2 rows in set (0.00 sec)
+----+-----------+----------------+---------------------+
| id | test_data | op_user        | op_time             |
+----+-----------+----------------+---------------------+
|  1 | lt1       | root@localhost | 2023-11-26 09:59:19 |
|  2 | lt2       | root@localhost | 2023-11-26 10:01:30 |
+----+-----------+----------------+---------------------+
2 rows in set (0.00 sec)

--- The REPLICA ---
mysql> select * from test_tab; select * from test_tab_log;
+----+-----------+---------------------+
| id | test_data | op_time             |
+----+-----------+---------------------+
|  1 | lt1       | 2023-11-26 09:59:19 |
|  2 | lt2       | 2023-11-26 10:01:30 |
+----+-----------+---------------------+
2 rows in set (0.00 sec)
+----+-----------+----------------+---------------------+
| id | test_data | op_user        | op_time             |
+----+-----------+----------------+---------------------+
|  1 | lt1       | root@localhost | 2023-11-26 09:59:19 |
|  2 | lt2       | root@localhost | 2023-11-26 10:01:30 |
+----+-----------+----------------+---------------------+
2 rows in set (0.00 sec)

8. If we insert other data into the test_tab table as another user (e.g., root@localhost), the trigger will still fire

mysql> select user();
+----------------+
| user()         |
+----------------+
| root@localhost |
+----------------+
1 row in set (0.00 sec)
mysql> insert into test_tab (id,test_data,op_time) values(3,'lt3',now());
Query OK, 1 row affected (0.01 sec)
--- The PRIMARY ---
mysql> select * from test_tab; select * from test_tab_log;
+----+-----------+---------------------+
| id | test_data | op_time             |
+----+-----------+---------------------+
|  1 | lt1       | 2023-11-26 09:59:19 |
|  2 | lt2       | 2023-11-26 10:01:30 |
|  3 | lt3       | 2023-11-26 21:04:26 |
+----+-----------+---------------------+
3 rows in set (0.00 sec)
+----+-----------+----------------+---------------------+
| id | test_data | op_user        | op_time             |
+----+-----------+----------------+---------------------+
|  1 | lt1       | root@localhost | 2023-11-26 09:59:19 |
|  2 | lt2       | root@localhost | 2023-11-26 10:01:30 |
|  3 | lt3       | root@localhost | 2023-11-26 21:04:26 |
+----+-----------+----------------+---------------------+
3 rows in set (0.00 sec)

--- The REPLICA ---
mysql> select * from test_tab; select * from test_tab_log;
+----+-----------+---------------------+
| id | test_data | op_time             |
+----+-----------+---------------------+
|  1 | lt1       | 2023-11-26 09:59:19 |
|  2 | lt2       | 2023-11-26 10:01:30 |
|  3 | lt3       | 2023-11-26 21:04:26 |
+----+-----------+---------------------+
3 rows in set (0.00 sec)

+----+-----------+----------------+---------------------+
| id | test_data | op_user        | op_time             |
+----+-----------+----------------+---------------------+
|  1 | lt1       | root@localhost | 2023-11-26 09:59:19 |
|  2 | lt2       | root@localhost | 2023-11-26 10:01:30 |
|  3 | lt3       | root@localhost | 2023-11-26 21:04:26 |
+----+-----------+----------------+---------------------+
3 rows in set (0.00 sec)

In our test case, we cover only one AFTER INSERT trigger. In a live production system, there may be more complex scenarios (e.g., many different types of triggers defined on the table you are going to sync, auto-increment values, foreign key constraints, etc.). It is better to test in a test environment before you go to production, and make sure you have a valid backup before making any system change.

I hope this gives you some ideas on using pt-table-sync with tables that have triggers.

Percona Distribution for MySQL is the most complete, stable, scalable, and secure open source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!

 

Try Percona Distribution for MySQL today!

Percona Database Performance Blog

Dune Part 2’s Epic New Trailer Teases a War Across Generations

https://i.kinja-img.com/image/upload/c_fill,h_675,pg_1,q_80,w_1200/6a12586b70dc3414af93d6de525cd139.png

He’s got the eyes.
Screenshot: Warner Bros.

It’s certainly disappointing that Dune: Part Two isn’t in theaters right now, but sometimes waiting is the best part. Case in point: today we’ve been graced with a brand new trailer, and it’ll make you somehow even more excited for the sequel than you already are. Which is really saying something.


Dune: Part Two stars Timothée Chalamet, Zendaya, Rebecca Ferguson, Josh Brolin, Austin Butler, Florence Pugh, Dave Bautista, Christopher Walken, and many more. It opens March 1, and here’s the new trailer.

Dune: Part Two | Official Trailer 3



Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

Gizmodo

Creating a custom Laravel Pulse card

https://d8nrpaglj2m0a.cloudfront.net/0e3d71bd-1b15-4fd4-b522-1272db7b946e/images/articles/og-pulse.jpg

Laravel Pulse is a lightweight application monitoring tool for Laravel. It was just
released today and I took a bit of time to create a custom card to
show outdated composer dependencies.

This is what the card looks like right now:

I was surprised at how easy the card was to create. Pulse has all of the infrastructure built out for:

  • storing data in response to events or on a schedule
  • retrieving your data back, aggregated or not
  • rendering your data into a view.

The hooks are all very well thought through.

There is no official documentation for custom cards yet, so much of this is subject to change. Everything I’m telling
you here I learned through diving into the source code.

Recording data

The first step is to create a Recorder that will record the data you’re looking to monitor. If you
open config/pulse.php you’ll see a list of recorders:

/*
|--------------------------------------------------------------------------
| Pulse Recorders
|--------------------------------------------------------------------------
|
| The following array lists the "recorders" that will be registered with
| Pulse, along with their configuration. Recorders gather application
| event data from requests and tasks to pass to your ingest driver.
|
*/

'recorders' => [
    Recorders\Servers::class => [
        'server_name' => env('PULSE_SERVER_NAME', gethostname()),
        'directories' => explode(':', env('PULSE_SERVER_DIRECTORIES', '/')),
    ],

    // more recorders ...
]

The recorders listen for application events. Pulse emits a SharedBeat event if your recorder needs to run on an
interval instead of in response to an application event.

For example, the Servers recorder records server stats every 15 seconds in response to the SharedBeat event:

class Servers
{
    public string $listen = SharedBeat::class;

    public function record(SharedBeat $event): void
    {
        if ($event->time->second % 15 !== 0) {
            return;
        }

        // Record server stats...
    }
}

But the Queue recorder listens for specific application events:

class Queues
{
    public array $listen = [
        JobReleasedAfterException::class,
        JobFailed::class,
        JobProcessed::class,
        JobProcessing::class,
        JobQueued::class,
    ];

    public function record(
        JobReleasedAfterException|JobFailed|JobProcessed|JobProcessing|JobQueued $event
    ): void
    {
        // Record the job...
    }
}

In our case, we just need to check for outdated packages once a day on a schedule, so we’ll use the SharedBeat event.

Creating the recorder

The recorder is a plain PHP class with a record method. Inside of that method you’re given one of the events to which
you’re listening. You also have access to Pulse in the constructor.

class Outdated
{
    public string $listen = SharedBeat::class;

    public function __construct(
        protected Pulse $pulse,
        protected Repository $config
    ) {
        //
    }

    public function record(SharedBeat $event): void
    {
        //
    }
}

The SharedBeat event has a time property on it, which we can use to decide if we want to run or not.

class Outdated
{
    // ...

    public function record(SharedBeat $event): void
    {
        // Only run once per day
        if ($event->time !== $event->time->startOfDay()) {
            return;
        }
    }
}

Pulse will handle invoking the record method; we just need to figure out what to do there. In our case, we're going to run composer outdated.

class Outdated
{
    // ...

    public function record(SharedBeat $event): void
    {
        // Only run once per day
        if ($event->time !== $event->time->startOfDay()) {
            return;
        }

        // Run composer to get the outdated dependencies
        $result = Process::run("composer outdated -D -f json");

        if ($result->failed()) {
            throw new RuntimeException(
                'Composer outdated failed: ' . $result->errorOutput()
            );
        }

        // Just make sure it's valid JSON (flags belong in json_decode's fourth argument)
        json_decode($result->output(), true, 512, JSON_THROW_ON_ERROR);
    }
}

Writing to the Pulse tables

Pulse ships with three separate tables:

  • pulse_aggregates
  • pulse_entries
  • pulse_values

There is currently no documentation, but from what I can tell the pulse_aggregates table stores pre-computed rollups
of time-series data for better performance. The entries table stores individual events, like requests or exceptions.
The values table seems to be a simple "point in time" store.

We’re going to use the values table to stash the output of composer outdated. To do this, we use the pulse->set()
method.

class Outdated
{
    // ...

    public function record(SharedBeat $event): void
    {
        // Only run once per day
        if ($event->time !== $event->time->startOfDay()) {
            return;
        }

        // Run composer to get the outdated dependencies
        $result = Process::run("composer outdated -D -f json");

        if ($result->failed()) {
            throw new RuntimeException(
                'Composer outdated failed: ' . $result->errorOutput()
            );
        }

        // Just make sure it's valid JSON (flags belong in json_decode's fourth argument)
        json_decode($result->output(), true, 512, JSON_THROW_ON_ERROR);

        // Store it in one of the Pulse tables
        $this->pulse->set('composer_outdated', 'result', $result->output());
    }
}

Now our data is stored and will be updated once per day. Let’s move on to displaying that data!

(Note: You don’t have to create a recorder. Your card can pull data from anywhere!)

Displaying the data

Pulse is built on top of Laravel Livewire. To add a new Pulse card to your dashboard,
we’ll create a new Livewire component called ComposerOutdated.

php artisan livewire:make ComposerOutdated

# COMPONENT CREATED
# CLASS: app/Livewire/ComposerOutdated.php
# VIEW: resources/views/livewire/composer-outdated.blade.php

By default, our ComposerOutdated class extends Livewire’s Component class, but we’re going to change that to extend
Pulse’s Card class.

namespace App\Livewire;

- use Livewire\Component;
+ use Laravel\Pulse\Livewire\Card;

- class ComposerOutdated extends Component
+ class ComposerOutdated extends Card
{
    public function render()
    {
        return view('livewire.composer-outdated');
    }
}

To get our data back out of the Pulse data store, we can just use the Pulse facade. This is one of the things I’m
really liking about Pulse. I don’t have to add migrations, maintain tables, add new models, etc. I can just use their
data store!

class ComposerOutdated extends Card
{
    public function render()
    {
        // Get the data out of the Pulse data store.
        $packages = Pulse::values('composer_outdated', ['result'])->first();

        $packages = $packages
            ? json_decode($packages->value, true, 512, JSON_THROW_ON_ERROR)['installed']
            : [];

        return View::make('composer-outdated', [
            'packages' => $packages,
        ]);
    }
}

Publishing the Pulse dashboard

To add our card to the Pulse dashboard, we must first publish the vendor view.

php artisan vendor:publish --tag=pulse-dashboard

Now, in our resources/views/vendor/pulse folder, we have a new dashboard.blade.php where we can add our custom card. This is what it looks like by default:

<x-pulse>
    <livewire:pulse.servers cols="full" />

    <livewire:pulse.usage cols="4" rows="2" />

    <livewire:pulse.queues cols="4" />

    <livewire:pulse.cache cols="4" />

    <livewire:pulse.slow-queries cols="8" />

    <livewire:pulse.exceptions cols="6" />

    <livewire:pulse.slow-requests cols="6" />

    <livewire:pulse.slow-jobs cols="6" />

    <livewire:pulse.slow-outgoing-requests cols="6" />
</x-pulse>

Adding our custom card

We can now add our new card wherever we want!

<x-pulse>
    <livewire:composer-outdated cols="1" rows="3" />

    <livewire:pulse.servers cols="full" />

    <livewire:pulse.usage cols="4" rows="2" />

    <livewire:pulse.queues cols="4" />

    <livewire:pulse.cache cols="4" />

    <livewire:pulse.slow-queries cols="8" />

    <livewire:pulse.exceptions cols="6" />

    <livewire:pulse.slow-requests cols="6" />

    <livewire:pulse.slow-jobs cols="6" />

    <livewire:pulse.slow-outgoing-requests cols="6" />
</x-pulse>

Community site

There is a lot to learn about Pulse, and I’ll continue to post here as I do. I’m working
on builtforpulse.com to showcase Pulse-related packages and articles, so make sure you stay
tuned over there!

GitHub Package

You can see this package at github.com/aarondfrancis/pulse-outdated.


Laravel News Links

Maximizing Performance of AWS RDS for MySQL with Dedicated Log Volumes

https://percona.com/blog/wp-content/uploads/2023/12/benchmark-rds-mysql-dlv3-1024×629.png
Maximizing AWS RDS for MySQL Performance

A quick configuration change may do the trick in improving the performance of your AWS RDS for MySQL instance. Here, we will discuss a notable new feature in Amazon RDS, the Dedicated Log Volume (DLV), that has been introduced to boost database performance. While this discussion primarily targets MySQL instances, the principles are also relevant to PostgreSQL and MariaDB instances.

What is a Dedicated Log Volume (DLV)?

A Dedicated Log Volume (DLV) is a specialized storage volume designed to house database transaction logs separately from the volume containing the database tables. This separation aims to streamline transaction write logging, improving efficiency and consistency. DLVs are particularly advantageous for databases with large allocated storage, high I/O per second (IOPS) requirements, or latency-sensitive workloads.

Who can benefit from DLV?

DLVs are currently supported for Provisioned IOPS (PIOPS) storage, with a fixed size of 1,000 GiB and 3,000 Provisioned IOPS. Amazon RDS extends support for DLVs across various database engines:

  • MariaDB: 10.6.7 and later v10 versions
  • MySQL: 8.0.28 and later v8 versions
  • PostgreSQL: 13.10 and later v13 versions, 14.7 and later v14 versions, and 15.2 and later v15 versions

Cost of enabling Dedicated Log Volumes (DLV) in RDS

The documentation doesn’t say much about additional charges for the Dedicated Log Volumes, but I reached out to AWS support, who responded exactly as follows: 

Please note that there are no additional costs for enabling a dedicated log volume (DLV) on Amazon RDS. By default, to enable DLV, you must be using PIOPS storage, sized at 1,000 GiB with 3,000 IOPS, and you will be priced according to the storage type. 

Are DLVs effective for your RDS instance?

Implementing dedicated mounts for components such as binlogs and datadir is a recommended standard practice. It becomes more manageable and efficient by isolating logs and data to a dedicated mount. This segregation facilitates optimized I/O operations, preventing potential bottlenecks and enhancing overall system performance. Overall, adopting this practice promotes a structured and efficient storage strategy, fostering better performance, manageability, and, ultimately, a more robust database environment.

Thus, using Dedicated Log Volumes (DLVs), though new in AWS RDS, has been one of the recommended best practices and is a welcome setup improvement for your RDS instance.

We performed a standard benchmarking test using the sysbench tool to compare the performance of a DLV instance vs a standard RDS MySQL instance, as shared in the following section.

Benchmarking AWS RDS DLV setup

Setup

2 RDS Single DB instances (Regular and DLV Enabled)    1 EC2 instance (sysbench client)
db.m6i.2xlarge                                         c5.2xlarge
MySQL 8.0.31                                           CentOS 7
8 Core / 32G                                           8 Core / 16G
Data Size: 32G

- Default RDS configuration was used, with binlogs enabled and full ACID-compliance settings.
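
The exact sysbench commands aren't listed in the post; a typical write-only run against each instance (endpoint, credentials, and table sizing here are placeholders) might look like the following, with oltp_read_only and oltp_read_write used for the other two workloads:

sysbench oltp_write_only \
  --mysql-host=<rds-endpoint> --mysql-user=sbtest --mysql-password=<password> \
  --mysql-db=sbtest --tables=10 --table-size=1000000 \
  --threads=64 --time=300 run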

Benchmark results for DLV-enabled instance vs. standard instance

Write-only traffic

AWS RDS for MySQL - DLV benchmarking

Read-write traffic

AWS RDS for MySQL - DLV benchmarking

Read-only traffic

AWS RDS for MySQL - DLV benchmarking

Benchmarking analysis

  • For both read-only and read-write traffic, there is a constant improvement in the QPS counters as the number of threads increases.
  • For write-only traffic, the QPS counters match the performance of standard RDS instances at lower thread counts, though at higher thread counts there is a drastic improvement.
  • The DLV, of course, affects the WRITE operations the most, and hence, the write-only test should be given the most consideration for the comparison of the DLV configuration vs. standard RDS.

Benchmarking outcome

Based on the sysbench benchmark results in the specified environment, it is strongly advised to employ DLV for a standard RDS instance. DLV demonstrates superior performance across most sysbench workloads, particularly showcasing notable enhancements in write-intensive scenarios.

Implementation considerations

When opting for DLVs, it’s crucial to be aware of the following considerations:

  1. DLV activation requires a reboot: After modifying the DLV setting for a DB instance (see the CLI sketch after this list), a reboot is mandatory for the changes to take effect.
  2. Recommended for larger configurations: While DLVs offer advantages across various scenarios, they are particularly recommended for database configurations of five TiB or greater. This recommendation underscores DLV’s effectiveness in handling substantial storage volumes.
  3. Benchmark and test: It is always recommended to test and review the performance of your application traffic rather than solely depending on standard benchmarking dependent on synthetic load.
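
For reference, DLV can be toggled on an existing instance from the console or the AWS CLI. The sketch below assumes the --dedicated-log-volume option available in recent AWS CLI versions (verify against the CLI reference for your version) and uses a placeholder instance identifier:

aws rds modify-db-instance \
  --db-instance-identifier my-rds-instance \
  --dedicated-log-volume \
  --apply-immediately

The change still requires the reboot noted in point 1 before it takes effect.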

DLV in Multi-AZ deployments

Amazon RDS seamlessly integrates DLVs with Multi-AZ deployments. Whether you’re modifying an existing Multi-AZ instance or creating a new one, DLVs are automatically created for both the primary and secondary instances. This ensures that the advantages of DLV extend to enhanced availability and reliability in Multi-AZ configurations.

DLV with read replicas

DLV support extends to read replicas. If the primary DB instance has DLV enabled, all read replicas created after DLV activation inherit this feature. However, it’s important to note that read replicas created before DLV activation will not have it enabled by default. Explicit modification is required for pre-existing read replicas to leverage DLV benefits.

Conclusion

Dedicated Log Volumes have emerged as a strong option for optimizing Amazon RDS performance. By segregating transaction logs and harnessing the power of dedicated storage, DLVs contribute to enhanced efficiency and consistency. Integrating DLVs into your database strategy will help you toward your efforts in achieving peak performance and reliability.

How Percona can help

Percona is a trusted partner for many industry-leading organizations across the globe that rely on us for help in fully utilizing their AWS RDS environment. Here’s how Percona can enhance your AWS RDS experience:

Expert configuration: RDS works well out of the box, but having Percona’s expertise ensures optimal performance. Our consultants will configure your AWS RDS instances for the best possible performance, ensuring minimal TCO.

Decades of experience: Our consultants bring decades of experience in solving complex database performance issues. They understand your goals and objectives, providing unbiased solutions for your database environment.

Blog resources: Percona experts are actively contributing to the community through knowledge sharing via forums and blogs.

Discover how our expert support, services, and enterprise-grade open source database software can make your business run better.

Get in touch

Percona Database Performance Blog