Automatically generate RSS feeds in a Laravel application

https://leopoletto.com/assets/images/how-to-generate-rss-feeds-in-a-laravel-application.png

One handy way of keeping users up to date on your content is an RSS feed, which lets them subscribe using an RSS reader. The effort to implement this feature is worthwhile because the website gains another content distribution channel.

Spatie is a company well known for creating hundreds of quality packages for Laravel. One of them is laravel-feed.
Let's see how it works:

Installation

The first step is to install the package in your Laravel Application:

composer require spatie/laravel-feed

Then you must publish the config file:

php artisan vendor:publish --provider="Spatie\Feed\FeedServiceProvider" --tag="feed-config"

Usage

Let’s break down the possibilities when configuring a feed.

Creating feeds

The config file has a feeds key containing an array in which each item represents a new feed, and the key is the feed name.

Let’s create a feed for our Blog Posts:

config/feed.php

return [
    'feeds' => [
        'blog-posts' => [
            //...
        ],
        'another-feed' => [
            //...
        ]   
    ]
];

The key blog-posts is the name of the feed, and its value is an array containing that feed's configuration.
You can create more feeds if needed, but for the sake of this article, let's focus on blog-posts.

That being said, for our model to work, it must implement the Spatie\Feed\Feedable interface.
The interface defines a single public method, toFeedItem, which must return an instance of Spatie\Feed\FeedItem.

Below is an example of how to create a FeedItem object:

app/Models/BlogPost.php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Spatie\Feed\Feedable;
use Spatie\Feed\FeedItem;

class BlogPost extends Model implements Feedable
{
    //...
    public function toFeedItem(): FeedItem
    {
        return FeedItem::create()
            ->id($this->id)
            ->title($this->title)
            ->summary($this->summary)
            ->updated($this->updated_at)
            ->link(route('blog-posts.show', $this->slug))
            ->authorName($this->author->name)
            ->authorEmail($this->author->email);
    }
}

Now we must create a class with a static method that returns a collection of App\Models\BlogPost objects:

app/Feed/BlogPostFeed.php

namespace App\Feed;

use App\Models\BlogPost;
use Illuminate\Database\Eloquent\Collection;

class BlogPostFeed
{
    public static function getFeedItems(): Collection
    {
        return BlogPost::all();
    } 
}
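
Returning every post can make the feed heavy over time. As a minimal sketch (assuming the model has an updated_at timestamp), getFeedItems could be limited to recent posts:

public static function getFeedItems(): Collection
{
    // Hypothetical refinement: feed only the 20 most recently updated posts
    return BlogPost::query()
        ->latest('updated_at')
        ->limit(20)
        ->get();
}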

Back to our config file, the first key for our feed configuration is items,
which defines where to retrieve the collection of posts.

config/feed.php

return [
    'feeds' => [
        'blog-posts' => [
            'items' => [App\Feed\BlogPostFeed::class, 'getFeedItems']
            //...
        ],
    ]
];

Then you have to define the URL:

config/feed.php

return [
    'feeds' => [
        'blog-posts' => [
            //'items' => [App\Feed\BlogPostFeed::class, 'getFeedItems'],
            'url' => '/posts', //https://domain.com/posts
            //...
        ],
    ]
];

Register the routes using the feeds macro included in the package:

routes/web.php

//...
Route::feeds();  //https://domain.com/posts

If you wish to add a prefix:

routes/web.php

//...
Route::feeds('rss'); //https://domain.com/rss/posts

Next, you must add a title, description, and language:

config/feed.php

return [
    'feeds' => [
        'blog-posts' => [
            //'items' => [App\Feed\BlogPostFeed::class, 'getFeedItems'],
            //'url' => '/posts',
            'title' => 'My feed',
            'description' => 'The description of the feed.',
            'language' => 'en-US',
            //...
        ],
    ]
];

You can also define the format of the feed and the view that will render it.
The acceptable values for format are rss, atom, or json:

config/feed.php

return [
    'feeds' => [
        'blog-posts' => [
            //'items' => [App\Feed\BlogPostFeed::class, 'getFeedItems'],
            //'url' => '/posts',
            //'title' => 'My feed',
            //'description' => 'The description of the feed.',
            //'language' => 'en-US',
            'format' => 'rss',
            'view' => 'feed::rss',
            //...
        ],
    ]
];

There are a few additional options:

 /*
 * The image to display for the feed. For Atom feeds, this is displayed as
 * a banner/logo; for RSS and JSON feeds, it's displayed as an icon.
 * An empty value omits the image attribute from the feed.
 */
'image' => '',

/*
 * The mime type to be used in the <link> tag. Set to an empty string to automatically
 * determine the correct value.
 */
'type' => '',

/*
 * The content type for the feed response. Set to an empty string to automatically
 * determine the correct value.
 */
'contentType' => '',

The final result of the config file should look like this:

config/feed.php

return [
    'feeds' => [
        'blog-posts' => [
            'items' => [App\Feed\BlogPostFeed::class, 'getFeedItems'],
            'url' => '/posts',
            'title' => 'My feed',
            'description' => 'The description of the feed.',
            'language' => 'en-US',
            'format' => 'rss',
            'view' => 'feed::rss',
            'image' => '',
            'type' => '',
            'contentType' => '',
        ],
    ]
];

Automatically generate feed links

Feed readers discover a feed by looking for a link tag in the head section of your HTML documents:

<link rel="alternate" type="application/atom+xml" title="News" href="/rss/posts">

Add this to your <head>:

@include('feed::links')

Alternatively, use the available Blade component:

<x-feed-links />

Conclusion

In this article, you've learned how easy it is to add an RSS feed to your website using the laravel-feed package from Spatie.

If you have any comments, you can share them in the discussion on Twitter.

Laravel News Links

PATH settings for Laravel

https://laravelnews.s3.amazonaws.com/images/path-featured.jpg

For Laravel development, we often find ourselves typing commands like ./vendor/bin/pest to run project-specific commands.

We don’t need to!

To help here, we can update our Mac (or Linux) $PATH variable.

What’s $PATH?

The $PATH variable sets the directories your system searches when looking for commands to run.

For example, we can type which <cmd> to find the path to any given command:

$ which git

/usr/local/bin/git

My system knew to find git in /usr/local/bin because /usr/local/bin is one directory set in my $PATH!

You can echo out your path right now:

# Output the whole path
echo $PATH

# For human-readability, split out each
# directory into a new line:
echo "$PATH" | tr ':' '\n'

Relative Directories in PATH

We can edit our $PATH variable to add in whatever directories we want!

One extremely handy trick is to set relative directories in your $PATH variable.

Two examples are adding ./vendor/bin and ./node_modules/.bin:

# In your ~/.zshrc, ~/.bashrc, ~/.bash_profile, or similar
# Each directory is separated by a colon
PATH=./vendor/bin:./node_modules/.bin:$PATH

Here we prepended our two new paths to the existing $PATH variable. Now, no matter what Laravel application we've cd'd into, we can run pest and know we're running ./vendor/bin/pest, or phpunit to run ./vendor/bin/phpunit (and the same for any given Node command in ./node_modules/.bin).

We can also set the current directory . in our $PATH (if it’s not already set – it may be):

# In your ~/.zshrc, ~/.bashrc, ~/.bash_profile, or similar
# Each directory is separated by a colon
# Here we also set the current directory in our PATH
PATH=.:./vendor/bin:./node_modules/.bin:$PATH

This way we can type artisan instead of ./artisan or php artisan.

These are the settings I have in place in Chipper CI so users can run pest or phpunit without having to worry about where the command exists in their CI environments.

Notes

Order also matters in $PATH. When a command is being searched for, the earlier directories are searched first. The system will use the first command found – this means you can override a system command by placing it in a directory earlier in $PATH. That's why we prepend ./vendor/bin and ./node_modules/.bin to $PATH instead of appending them.

You can find all locations of a command like this:

$ which -a git

git is /usr/local/bin/git
git is /usr/bin/git

Lastly, in all cases here, the commands should have executable permissions to work like this. This is something to keep in mind when creating your own commands, such as a custom bash script.
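
For example, here's a minimal sketch of making a custom script runnable by name (the script name deploy.sh is hypothetical; it lives in a directory already on $PATH, such as ./vendor/bin):

# Give the script executable permissions
chmod +x ./vendor/bin/deploy.sh

# The shell can now find and run it by name
which deploy.sh
deploy.sh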

Laravel News

Scientists in Japan Develop Experimental Alzheimer’s Vaccine That Shows Promise in Mice

https://i.kinja-img.com/gawker-media/image/upload/c_fill,f_auto,fl_progressive,g_center,h_675,pg_1,q_80,w_1200/6c6ea977db6c4dd1d04bba37a0b2b576.jpg

Scientists in Japan may be at the start of a truly monumental accomplishment: a vaccine that can slow or delay the progression of Alzheimer’s disease. In preliminary research released this week, the vaccine appeared to reduce inflammation and other important biomarkers in the brains of mice with Alzheimer’s-like illness, while also improving their awareness. More research will be needed before this vaccine can be tested in humans, however.


The experimental vaccine is being developed primarily by scientists from Juntendo University in Japan.

It’s intended to work by training the immune system to go after certain senescent cells, aging cells that no longer divide to make more of themselves, but instead stick around in the body. These cells aren’t necessarily harmful, and some play a vital role in healing and other life functions. But they’ve also been linked to a variety of age-related diseases, including Alzheimer’s. The vaccine specifically targets senescent cells that produce high levels of something called senescence-associated glycoprotein, or SAGP. Other research has suggested that people with Alzheimer’s tend to have brains filled with these cells in particular.

The team tested their vaccine on mice bred to have brains that develop the same sort of gradual destruction seen in humans with Alzheimer’s. This damage is thought to be fueled by the accumulation of a misfolded form of amyloid-beta, a protein. The mice were divided into two groups, with only one group given the actual vaccine.

In the brains of the vaccinated mice, the team found signs of reduced inflammation and fewer amyloid deposits along with lower levels of SAGP-expressing cells. These mice also seemed to behave more like typical mice compared to controls. They continued to exhibit anxiety as they aged, for instance—a trait that tends to fade in people with late-stage Alzheimer’s. They also showed more awareness of their surroundings during maze tests.

The findings were presented over the weekend at the American Heart Association's Basic Cardiovascular Sciences Scientific Sessions 2023. That means this research hasn't been formally peer-reviewed yet, so it should be viewed with added caution. At the same time, the team's vaccine appears to have met an important criterion that many past attempts have failed to reach.

“Earlier studies using different vaccines to treat Alzheimer’s disease in mouse models have been successful in reducing amyloid plaque deposits and inflammatory factors, however, what makes our study different is that our SAGP vaccine also altered the behavior of these mice for the better,” said lead author Chieh-Lun Hsiao, a post-doctoral fellow in the department of cardiovascular biology and medicine at Juntendo University, in a statement released by the American Heart Association.

Of course, mice studies are only the beginning of showing that an experimental drug or vaccine can possibly work as intended. It will take further studies to validate these results and to test the vaccine’s safety in humans before large-scale trials even enter the picture.

But there have been several recent, if modest, successes in Alzheimer's treatment, and other experimental candidates—including vaccines—are already in clinical trials. With any luck, these newer and upcoming therapies might one day stop Alzheimer's from being the incurable death sentence that it is today.

Gizmodo

5 Free Online Games and Websites to Master Linux and the Command Line

https://static1.makeuseofimages.com/wordpress/wp-content/uploads/2023/07/linux-with-a-pengiun-as-the-i-letter.jpg

Learning Linux is essential for anyone working in the IT field. Linux distros are helpful to developers, system administrators, and cloud and network engineers.

Linux is popular because of its reliability and wide range of practical applications. If you want to know more about Linux, here are five websites that will help you learn it interactively. These sites have free games and exercises based on the Linux architecture and commands.

Linux Survival makes it easy to learn and master essential Linux commands, and it covers everything you need to get started. In module one, you will learn about the Linux directory structure. You will also learn to create directories and delete files using the command line.

You can practice listing file contents, renaming directories, and locating documents. In advanced modules, you will learn to obtain user information and manage security.

At the end of each module, there’s a practice quiz to test your knowledge. With Linux Survival, you can play around with familiar data, such as animals in a zoo.

You learn how to manipulate the data using commands on the screen. You then type the commands on the interactive shell and see the results.

The interface is simple and easy for beginners as they get instructions and an interactive shell to practice. The best part is you don’t have to sign up to use the workspace. You can start learning as soon as you land on the website. But it’s recommended that you create an account to track your progress.

Terminus is a command-line game created by MIT (the Massachusetts Institute of Technology). The game provides an interactive command-line interface to practice Linux, along with a set of commands and instructions on how to use them.

The interface is excellent for beginners who want to learn how to interact with the command line. They provide data in files called locations that you can work with using the commands. For example, you must retrieve specific data to complete a challenge. You can also print information and change directories.

As you navigate through the directories, a picture on the terminal shows you where you are. This immerses your imagination in the game, making it fun and adventurous.

You can play Terminus without having to sign up on the website. Go ahead and explore this fun game.

Command Line Murder Mystery is a thrilling way to learn the Linux command line. With this game, you can be a police detective for a day. The game includes a fictitious police department looking to solve a murder plot. You must help them solve the murder by looking for hints and clues about the perpetrator.

In the game, you use Linux commands to navigate through folders and files, searching for clues. First, go to the project’s GitHub repository and download or clone the folder to your device.
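
A minimal sketch of that step (the repository path below is the commonly referenced one; verify it on GitHub first):

git clone https://github.com/veltman/clmystery.git
cd clmystery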

When you open the folder labeled clmystery, you will see the files to work with. You can begin with the instructions file that guides you on how to play. They have cheat sheet files showing you Linux commands and how to use them.

If you get stuck, you can look for clues in the hint file. There’s also a solution file if you want to check whether your answer is correct. CLI Murder Mystery teaches a lot about controlling the terminal and managing its processes.

Bandit is one of the Wargames offered by the OverTheWire community. It is aimed at absolute beginners, helping you learn Linux by playing around with the interface.

You will learn several Linux commands while trying to solve various challenges. It helps you practice security concepts while playing fun games on the command line. As a beginner, you should start with the basics and advance to level 34.

Bandit helps you get familiar with the command line as you run the game on your device. It’s a great introduction to working with the terminal and Linux code editors and IDEs. To play, you must go to the website and obtain instructions on connecting using SSH (Secure Shell).
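
As a reference sketch, the connection command published on the Bandit pages looks like this at the time of writing (level 0 logs in as bandit0 with the password bandit0):

ssh bandit0@bandit.labs.overthewire.org -p 2220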

The game has different levels. You start at Level 0 and pass it by obtaining a password to access the next level. Each level provides instructions on what to do to finish the level. Without the passwords, you cannot access the next level of the game.

All the levels have a page on the website with commands to win the game. They also provide a detailed explanation of each command and how to use it.

Playing Bandit will ensure you have a good understanding of Linux commands and how to apply them. If you get stuck, you can reach out to their community; they are always eager to help.

With Linux Journey, you will learn everything you need to know about Linux. The site is full of resources for both beginners and advanced learners. The exercises familiarize you with terms, jargon, and phrases used in Linux distributions as well.

You start learning about the origin of Linux and its distributions. Then you explore the command line, user management processes, and Linux security.

The interface has sections with notes and instructions on how to run commands. There’s also a separate interactive shell where you can practice Linux commands. At the end of each lesson, you have a quiz to test your knowledge.

The site is free to use, and there’s no need for sign-ups. All you have to do is navigate to the site and start learning.

Why Learn Linux Using Online Games and Websites?

Linux is one of the most popular technologies used today. This is because of its versatility and numerous career opportunities in the IT field.

It introduces you to opportunities that will help you progress in your IT career. With Linux, you can contribute to open-source projects and collaborate with others. Learning Linux also introduces you to a community of Linux supporters worldwide.

MakeUseOf

7 Jupyter Notebook Tips and Tricks to Maximize Your Productivity

https://static1.makeuseofimages.com/wordpress/wp-content/uploads/2023/07/jupyter-notebook-essential-tips-and-tricks-featured-image-1.jpg

Key Takeaways

  • Understanding the difference between command and edit modes is essential for working with Jupyter Notebook. Each mode provides different functionalities and shortcuts.
  • Accessing and using keyboard shortcuts can save you time by avoiding a series of steps for each operation. Make sure you’re in the right mode when executing shortcuts.
  • Jupyter Notebook allows for customization through extensions or manual customization. Use extensions for easier customization or manually customize by creating a CSS file. Restart the notebook for changes to take effect.

Jupyter Notebook is a web-based interactive computing environment that you can use for data analysis and collaborative coding. It allows the integration of code, text, and visualizations into a single document. It has an extensive ecosystem of libraries available for accomplishing different tasks.

It dominates the data science world when it comes to data analysis, data preprocessing, and feature engineering. Here are some essential tips and tricks to help you make the most out of your notebook experience.

1. Difference Between Command Mode and Edit Mode

Understanding the difference between the command and edit modes is one of the fundamental aspects of working with a Jupyter Notebook. This is because each mode provides different functionalities and shortcuts.

The edit mode is indicated by a green border and is the default mode when you select a cell for editing.

In this mode, you can type and edit code within the cell. To enter edit mode, double-click on a cell or press Enter when one is selected.

The command mode is indicated by a blue cell border. It is also the default mode when you are not actively editing a cell.

In this mode, you can perform notebook-level operations such as creating, deleting, changing, or executing cells. To switch from edit mode to command mode, press the Esc key.

2. Accessing and Using the Keyboard Shortcuts

Jupyter Notebook has a keyboard shortcuts dialog that lists all the available shortcuts. To access it, make sure you are in command mode, then press the H key. A pop-up window listing the shortcuts should appear.

Each shortcut has an explanation of what it does next to it. The shortcuts are divided into those available in command mode and those available in edit mode. Make sure you are in the right mode when executing the respective shortcut. Using these shortcuts will save you a lot of time, as you won't have to follow a series of steps for each operation.

3. Using Magic Commands

Magic commands provide additional functionality that you can use for executing tasks. To use them, prefix the command with % for line magics or %% for cell magics. Instead of memorizing them all, you can list the available magic commands with the %lsmagic command.

In a new cell, run the %lsmagic command. This will display all the available line and cell magics. To understand what a command does, run it with a trailing question mark to get its documentation. For example, to understand what the %alias magic command does, run %alias?.

Make sure you understand the mode a command runs on before using it.
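
A minimal sketch of both kinds of magic, run as two separate notebook cells (the computations are arbitrary examples):

# Cell 1: a line magic, prefixed with %
%timeit sum(range(1000))

%%time
# Cell 2: a cell magic; note that %%time must be the first line of its cell
total = sum(i * i for i in range(1_000_000))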

4. Customizing the Notebook

Jupyter Notebook allows for user customization if you do not like the default look. You can customize it in one of two ways: manually, or via extensions. The easier alternative is to use extensions.

To use extensions, run the following command in a new cell. This command installs jupyterthemes, an extension that comes with predefined themes.

 !pip install jupyterthemes

Then proceed to your terminal or CMD to apply configurations. Start by listing the available themes using the code below.

 jt -l

Then use the following command to apply a theme. Replace the theme name with your desired one.

 jt -t <theme_name>

After applying the theme, restart the Jupyter Notebook for the changes to take effect. The oceans16 theme, for example, restyles the entire interface.

If you would like to restore the notebook back to default, use the following command.

 jt -r

The command reverts the Jupyter Notebook to its initial default theme.

To manually customize your notebook, follow these steps:

  • Go to the directory where you installed Jupyter Notebook and find the directory named .jupyter.
  • Create a new folder inside it and name it custom.
  • Create a CSS file in the custom directory and name it custom.css.
  • Open the CSS file with an editor and add your CSS customization code.

After adding the code, restart your Jupyter Notebook for the changes to take effect.

5. Collaboration and Sharing

When you are coding, you may want to collaborate with other developers. To achieve this with Jupyter Notebook, you can use a version control system such as Git. To use Git, initialize a repository in your project's root directory, then add and commit each change you make to the notebook.

Finally, share the repository with the people you want to collaborate with by pushing it to GitHub. This allows the collaborators to clone the repository and access your Jupyter Notebook files.
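
A minimal sketch of that workflow (the notebook filename and remote URL are hypothetical):

# Initialize a repository in the project root and commit the notebook
git init
git add analysis.ipynb
git commit -m "Add initial analysis notebook"

# Publish it so collaborators can clone it (assumes a default branch named main)
git remote add origin https://github.com/your-user/your-repo.git
git push -u origin main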

6. Using Widgets and Interactive Features

Widgets and interactive features help you create dynamic user interfaces within your notebook.

They give you a way to interact with and visualize your data. Jupyter Notebook supports a few widgets by default. To use more widgets, you need to install the ipywidgets library using the following command.

 !pip install ipywidgets

After installing, import the widgets module to use its functionalities.

 import ipywidgets as widgets

You now need to create the widget of your choice. For example, to create a slider widget use the following code:

 slider = widgets.IntSlider(min=0, max=100, value=50, description='Slider:')

Then display the slider.

 display(slider) 

The output is an interactive slider rendered below the cell.

You can use the slider for user input and for selecting a numeric value within a specified range. The library supports many other widgets. To list them, use the following line of code:

 dir(widgets)

Look for the widget that supports your requirements from the list.

7. Tips for Efficiency and Performance

To improve the efficiency and performance of your notebook, the following tips come in handy:

  • Limit the output and use progress indicators: This will help you avoid cluttering your notebook with excessive output. Use progress indicators to track the progress of long computations; the tqdm library is useful for this purpose (see the sketch after this list).
  • Minimize cell execution: Execute only the necessary cells to save resources. For example, Run All Above runs only the cells above the selected one.
  • Optimize loops and data processing: Use vectorized operations and optimized libraries. Also, avoid unnecessary loops, especially nested loops. They can impact performance. Instead, utilize built-in functions and methods available in data manipulation libraries.
  • Use cached results: If you have time-consuming computations or data loading, consider caching the results to avoid redundant calculations. Use tools like joblib or Pickle for caching.
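
A minimal sketch combining the first and last tips, assuming tqdm and joblib are installed (pip install tqdm joblib); the function and cache directory names are hypothetical:

from joblib import Memory
from tqdm.auto import tqdm

# Cache expensive results on disk so reruns skip redundant work
memory = Memory("cache_dir", verbose=0)

@memory.cache
def slow_square(n):
    return n * n

# Wrap the iterable in tqdm to display a progress bar in the notebook
total = 0
for i in tqdm(range(1000), desc="Processing"):
    total += slow_square(i)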

How to Improve Your Performance as a Data Scientist

In the data science world, there are many tools that can help you increase your throughput. These can be libraries you install in your development environment, IDEs tailored for data analysis, or even browser extensions. Research the available tools out there; they can simplify your work and save you a lot of time.

MakeUseOf

Migrating From On-Prem to RDS MySQL/Aurora? DEFINERS Is the Answer

https://www.percona.com/blog/wp-content/uploads/2023/03/ai-cloud-concept-with-robot-arm-1-200×150.jpg

Hello friends! If you plan to migrate your database from on-prem servers to RDS (either Aurora or MySQL RDS), you usually don't have much choice but to do so using logical backups such as mysqldump, mysqlpump, mydumper, or similar. (Actually, you could do a physical backup with Percona XtraBackup to S3, but given that it has not been mentioned which brand — MySQL, Percona Server for MySQL, or MariaDB — or which version — 5.5, 5.6, or MariaDB 10.X — is the source, many of those combinations are unsupported for this strategy, so logical backup is the way to go.)

Depending on the size of the instance or the schema to be migrated, we can choose one tool or another to take advantage of the resources of the servers involved and save time.

In this blog, for the sake of simplicity, we are going to use mysqldump and generate a single table. The interesting twist is that we are going to create objects that have a certain DEFINER, and it must not be changed.

If you want to create the same lab, you can find it here.

Below is the list of objects to migrate (the schema is called "migration" and contains the following objects):

mysql Source> SELECT *
FROM   (SELECT event_schema AS SCHEMA_NAME,
               event_name   AS OBJECT_NAME,
               definer,
               'EVENT'      AS OBJECT_TYPE
        FROM   information_schema.events
        UNION ALL
        SELECT routine_schema AS SCHEMA_NAME,
               routine_name   AS OBJECT_NAME,
               definer,
               'ROUTINE'      AS OBJECT_TYPE
        FROM   information_schema.routines
        UNION ALL
        SELECT trigger_schema AS SCHEMA_NAME,
               trigger_name   AS OBJECT_NAME,
               definer,
               'TRIGGER'      AS OBJECT_TYPE
        FROM   information_schema.triggers
        UNION ALL
        SELECT table_schema AS SCHEMA_NAME,
               table_name   AS OBJECT_NAME,
               definer,
               'VIEW'       AS OBJECT_TYPE
        FROM   information_schema.views
        UNION ALL
        SELECT table_schema AS SCHEMA_NAME,
               table_name   AS OBJECT_NAME,
               '',
               'TABLE'       AS OBJECT_TYPE
        FROM   information_schema.tables
        Where engine <> 'NULL'
) OBJECTS
WHERE  OBJECTS.SCHEMA_NAME = 'migration'
ORDER  BY 3,
          4;

+-------------+-----------------------+---------+-------------+
| SCHEMA_NAME | OBJECT_NAME           | DEFINER | OBJECT_TYPE |
+-------------+-----------------------+---------+-------------+
| migration   | persons               |         | TABLE       |
| migration   | persons_audit         |         | TABLE       |
| migration   | func_cube             | foo@%   | ROUTINE     |
| migration   | before_persons_update | foo@%   | TRIGGER     |
| migration   | v_persons             | foo@%   | VIEW        |
+-------------+-----------------------+---------+-------------+

5 rows in set (0.01 sec)

That’s right, that’s all we got.

The classic command that is executed for this kind of thing is usually the following:

$ mysqldump --single-transaction -h source-host -u percona -ps3cre3t! migration --routines --triggers --compact --add-drop-table --skip-comments > migration.sql

What is the next logical step to follow in the RDS/Aurora instance (AKA the “Destination”)?

  • Create the necessary users (you can do this using the pt-show-grants tool to extract the users and their permissions; see the example after this list).
  • Create the schema “migration.”
  • Import the schema from the command line.
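
As a sketch of that first step, pt-show-grants from Percona Toolkit can dump the users and grants from the source (using the same host and credentials as in this example):

# Extract users and grants from the source server
$ pt-show-grants --host=source-host --user=percona --password=s3cre3t! > grants.sql

Review grants.sql, then run it against the destination instance.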

Here we must make a clarification: as you may have noticed, the objects belong to the user “foo,” who is a user of the application, and it is very likely that for security reasons, the client or the interested party does not provide us with the password.

Therefore, as DBAs, we will use the user with all the permissions that AWS allows us to have (unfortunately, AWS does not allow the SUPER privilege). This creates a problem, which we will show below and then solve.

So, the command to execute the data import would be the following:

$ mysql -h <instance-endpoint> migration -u percona -ps3cre3t! -vv < migration.sql

And this is where the problems begin:

If you want to migrate to a version of RDS MySQL/Aurora 5.7 (which we don’t recommend as the EOL is October 31, 2023!!) you will probably get the following error:

--------------
DROP TABLE IF EXISTS `persons`
--------------

Query OK, 0 rows affected

--------------
/*!40101 SET @saved_cs_client     = @@character_set_client */
--------------

Query OK, 0 rows affected

--------------
/*!50503 SET character_set_client = utf8mb4 */
--------------

Query OK, 0 rows affected

--------------
CREATE TABLE `persons` (
  `PersonID` int NOT NULL,
  `LastName` varchar(255) DEFAULT NULL,
  `FirstName` varchar(255) DEFAULT NULL,
  `Address` varchar(255) DEFAULT NULL,
  `City` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`PersonID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

... lot of messages/lines

--------------
/*!50003 CREATE*/ /*!50017 DEFINER=`foo`@`%`*/ /*!50003 TRIGGER `before_persons_update` BEFORE UPDATE ON `persons` FOR EACH ROW INSERT INTO persons_audit
 SET PersonID = OLD.PersonID,
     LastName = OLD.LastName,
     City     = OLD.City,
     changedat = NOW() */
--------------

ERROR 1227 (42000) at line 23: Access denied; you need (at least one of) the SUPER privilege(s) for this operation
Bye


What does this error mean? We are not executing the import (which is nothing more than running a set of queries and SQL commands) as the user "foo," the owner of the objects (see again the definer column of the first query shown above). So the user "percona" needs special permissions such as SUPER to impersonate and "become" "foo" — but as we mentioned earlier, that privilege is not available in AWS.

So?

Several options are possible; we will list some of them:

  • Edit the migration.sql file and, in each definition that has a DEFINER other than percona, replace it with percona or remove the DEFINER clause entirely (see the sed sketch after this list). Pros: it works. Cons: objects will execute with the security context of the user "percona," which is not only dangerous but also wrong.
  • Apply the solution that my colleague Sveta proposes here, but you must use mysqlpump. Even so, the migrated objects keep the DEFINER with which they were imported.
  • As a last resort, request the password of the user “foo,” which is not always possible.
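
As a sketch of the first option (assuming the dump writes definers as DEFINER=`user`@`host`; always test against a copy of the dump first):

# Strip the DEFINER clauses entirely
$ sed -E 's/DEFINER=`[^`]+`@`[^`]+`//g' migration.sql > migration_nodefiner.sql

# Or rewrite them to the importing user
$ sed -E 's/DEFINER=`[^`]+`@`[^`]+`/DEFINER=`percona`@`%`/g' migration.sql > migration_percona.sql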

As you will see, the solution is not simple. I would say complex but not impossible.

Let’s see what happens if the RDS/Aurora version is from the MySQL 8 family. Using the same command to perform the import, this is the output:

--------------
DROP TABLE IF EXISTS `persons`
--------------

Query OK, 0 rows affected

--------------
/*!40101 SET @saved_cs_client     = @@character_set_client */
--------------

Query OK, 0 rows affected

--------------
/*!50503 SET character_set_client = utf8mb4 */
--------------

Query OK, 0 rows affected

--------------
CREATE TABLE `persons` (
  `PersonID` int NOT NULL,
  `LastName` varchar(255) DEFAULT NULL,
  `FirstName` varchar(255) DEFAULT NULL,
  `Address` varchar(255) DEFAULT NULL,
  `City` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`PersonID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
--------------

Query OK, 0 rows affected

... lot of messages/lines

--------------
/*!50003 CREATE*/ /*!50017 DEFINER=`foo`@`%`*/ /*!50003 TRIGGER `before_persons_update` BEFORE UPDATE ON `persons` FOR EACH ROW INSERT INTO persons_audit
 SET PersonID = OLD.PersonID,
     LastName = OLD.LastName,
     City     = OLD.City,
     changedat = NOW() */
--------------

ERROR 1227 (42000) at line 23: Access denied; you need (at least one of) the SUPER or SET_USER_ID privilege(s) for this operation

Oops! A different message appeared, saying something like, “You need (at least one of) SUPER or SET_USER_ID privileges for this operation.”

Therefore, all we have to do now is assign the following permission to the “percona” user:

mysql Destination> GRANT SET_USER_ID ON *.* TO 'percona';

And bingo! The import finishes without problems. Here are some of the commands that previously failed and now work:

--------------
/*!50003 CREATE*/ /*!50017 DEFINER=`foo`@`%`*/ /*!50003 TRIGGER `before_persons_update` BEFORE UPDATE ON `persons` FOR EACH ROW INSERT INTO persons_audit
 SET PersonID = OLD.PersonID,
     LastName = OLD.LastName,
     City     = OLD.City,
     changedat = NOW() */
--------------

Query OK, 0 rows affected

--------------
CREATE DEFINER=`foo`@`%` FUNCTION `func_cube`(num INT) RETURNS int
    DETERMINISTIC
begin   DECLARE totalcube INT;    SET totalcube = num * num * num;    RETURN totalcube; end
--------------

Query OK, 0 rows affected

Besides that, the objects keep the DEFINER (the security context) they correspond to.

mysql Destination> SELECT *
FROM   (SELECT event_schema AS SCHEMA_NAME,
               event_name   AS OBJECT_NAME,
               definer,
               'EVENT'      AS OBJECT_TYPE
        FROM   information_schema.events
        UNION ALL
        SELECT routine_schema AS SCHEMA_NAME,
               routine_name   AS OBJECT_NAME,
               definer,
               'ROUTINE'      AS OBJECT_TYPE
        FROM   information_schema.routines
        UNION ALL
        SELECT trigger_schema AS SCHEMA_NAME,
               trigger_name   AS OBJECT_NAME,
               definer,
               'TRIGGER'      AS OBJECT_TYPE
        FROM   information_schema.triggers
        UNION ALL
        SELECT table_schema AS SCHEMA_NAME,
               table_name   AS OBJECT_NAME,
               definer,
               'VIEW'       AS OBJECT_TYPE
        FROM   information_schema.views
        UNION ALL
        SELECT table_schema AS SCHEMA_NAME,
               table_name   AS OBJECT_NAME,
               '',
               'TABLE'       AS OBJECT_TYPE
        FROM   information_schema.tables
        Where engine <> 'NULL'
) OBJECTS
WHERE  OBJECTS.SCHEMA_NAME = 'migration'
ORDER  BY 3,
          4;
+-------------+-----------------------+---------+-------------+
| SCHEMA_NAME | OBJECT_NAME           | DEFINER | OBJECT_TYPE |
+-------------+-----------------------+---------+-------------+
| migration   | persons               |         | TABLE       |
| migration   | persons_audit         |         | TABLE       |
| migration   | func_cube             | foo@%   | ROUTINE     |
| migration   | before_persons_update | foo@%   | TRIGGER     |
| migration   | v_persons             | foo@%   | VIEW        |
+-------------+-----------------------+---------+-------------+
5 rows in set (0.01 sec)

Conclusion

As you can see, there are no more excuses: it is time to migrate to MySQL 8. Small details like this make the migration easier.

A migration of this type is usually problematic; it requires several iterations in a test environment until everything works well, and things can still fail. Now, my dear reader, knowing that MySQL 8 solves this problem (as of version 8.0.22), I ask you: what are you waiting for to migrate?

Of course, these kinds of migrations can be complex. But Percona is at your service, and as such, I share Upgrading to MySQL 8: Tools That Can Help from my colleague Arunjith that can guide you so that the necessary migration reaches a good destination.

And remember, you always have the chance to contact us and ask for assistance with any migration. You can also learn how Percona experts can help you migrate to Percona Server for MySQL seamlessly:

Upgrading to MySQL 8.0 with Percona

I hope you enjoyed the blog, and see you in the next one!

Percona Database Performance Blog