Laravel Real-Time performance monitoring & alerting using Inspector

https://inspector.dev/wp-content/uploads/2020/02/laravel-monitoring-cover-3.jpg

Hi, I’m Valerio: software engineer, founder, and CTO at Inspector.

As a product owner, I’ve learned how hard an application issue can be to fix when it negatively impacts the users’ experience, or worse, blocks potential new customers during onboarding.

I publish new code changes almost every day. Unfortunately, it’s impossible to anticipate all the problems that could happen after each release. Furthermore, users don’t spend their time reporting bugs: if the application doesn’t work as expected, they simply stop using it and look for another one that better fits their needs.

In most of the projects I’ve worked on, around 50% of the problems users run into are caused by simple code mistakes. And the more the application grows (more lines of code, more developers at work), the harder it is to avoid incidents.

When I started to share the idea behind Inspector, I realized that many developers know the problem well: they spend too much time investigating strange behaviors inside their applications. Yet they didn’t know there was a solution that could end this complexity in two minutes, avoiding customer complaints or even customer churn.

Be the first to know when your application is in trouble, before users stumble onto the problem, and drastically reduce the negative impact on their experience. This gives you the proper foundation to run a successful user acquisition process, and lets you increase engagement with fewer interruptions.

Laravel Code Execution Monitoring: how it works

Inspector is a composer package that adds real-time code execution monitoring to your Laravel application. It allows you to work on continuous code changes while catching bugs and bottlenecks in real time, before users do.

It takes less than one minute to get started. Let’s see how it works.

Install the composer package

Run the composer command below in your terminal to install the latest version:

composer require inspector-apm/inspector-laravel

Configure the Ingestion Key

Get a new Ingestion Key by signing up for Inspector (https://app.inspector.dev/register) and creating a new project; it only takes a few seconds.

You’ll see installation instructions directly on the app screen.

Put the API key into your environment file:

INSPECTOR_API_KEY=xxxxxxxxxxxxxxx

Test that everything is working

Execute the test command to check if your app sends data to Inspector correctly:

php artisan inspector:test

Go to https://app.inspector.dev/home to explore the demo data.


By default, Inspector monitors:

  • Database interactions
  • Queued Jobs execution
  • Artisan commands
  • Emails sent
  • Unhandled Exceptions

With that, we’ve turned on the lights for the roughly 50% of our app that executes in the background. The next step is to monitor the execution cycles generated by user interactions.

Monitor Incoming HTTP Requests

To activate monitoring of incoming HTTP requests, attach the WebRequestMonitoring middleware as an independent component. You are then free to decide which routes to monitor, based on your routes configuration or your monitoring preferences.

Attach the middleware in the App\Http\Kernel class:

/**
 * The application's route middleware groups.
 *
 * @var array
 */
protected $middlewareGroups = [
    'web' => [
       …,
       \Inspector\Laravel\Middleware\WebRequestMonitoring::class,
    ],

    'api' => [
       …,
       \Inspector\Laravel\Middleware\WebRequestMonitoring::class,
    ],
];
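
If you’d rather not monitor every web and API route, you can instead attach the middleware to selected routes. This is a minimal sketch using standard Laravel route groups; the checkout routes and controller below are hypothetical:

// routes/web.php
use App\Http\Controllers\CheckoutController; // hypothetical controller
use Illuminate\Support\Facades\Route;
use Inspector\Laravel\Middleware\WebRequestMonitoring;

// Monitor only the (hypothetical) checkout flow
Route::middleware(WebRequestMonitoring::class)->group(function () {
    Route::get('/checkout', [CheckoutController::class, 'show']);
    Route::post('/checkout', [CheckoutController::class, 'store']);
});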

Deploy your code and navigate the execution flow

The next step is to deploy your code to the production environment and check out how Inspector creates a visual representation of what happens inside your code.

You will see transaction streams in your dashboard. And for each transaction, you can monitor what your application executes in real-time:

Enrich the Inspector timeline

Inspector monitors database queries, background jobs, and artisan commands by default. Still, there might be many other critical statements in your code that need monitoring for performance and errors:

  • HTTP calls to external services
  • Functions that deal with files (PDF, Excel, images)

Thanks to Inspector, you can add custom segments in your timeline besides those detected by default. This allows you to measure the impact that a hidden code block has on a transaction’s performance.

Let me show you a real-life example.

Suppose you have a queued job that executes some database queries and an HTTP request to an external service in the background.

Inspector detects the job and the database queries by default, but it could be interesting to also monitor and measure the execution of the HTTP request to the external service, and to activate alerts if something goes wrong.

Use the inspector() helper function:

class TagUserAsActive extends Job
{
    /**
     * @var User $user
     */
    protected $user;

    /**
     * @var \GuzzleHttp\Client $guzzle
     */
    protected $guzzle;

    // Monitor the external HTTP request as a custom "http" segment
    public function handle()
    {
        inspector()->addSegment(function () {
            $this->guzzle->post('/mail-marketing/add_tag', [
                'email' => $this->user->email,
                'tag' => 'active',
            ]);
        }, 'http');
    }
}
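
The same two-argument addSegment() call works for any other critical block, like the file-handling functions mentioned above. A minimal sketch, assuming the segment type is a free-form string (as the 'http' example suggests) and using a hypothetical PdfGenerator class:

// Somewhere in a job or controller; PdfGenerator is hypothetical
inspector()->addSegment(function () {
    (new PdfGenerator())->render('reports.monthly')
        ->save(storage_path('app/reports/monthly.pdf'));
}, 'file');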

You will be able to identify the impact of the new segment in the transaction timeline:

Laravel Errors & Exceptions Alerting

By default, every exception fired in your Laravel app is reported. This ensures you’re alerted to unpredictable errors in real-time.

I wish every change I made to my code could be perfect. But the reality is that this is not always the case: some errors appear immediately after an update, while others pop up unexpectedly. It’s an unfortunate fact of life for developers, and it often also depends on problems caused by the connection between our application and other services.

Yet, Inspector makes the job easier. It automates the detection of unknown issues, so you no longer need to manually check the status of your apps or wait for reports from users. If something goes wrong, you’ll receive a notification in real time, and after each release you can stay informed about the impact of the latest code refactor.

If your code fires an exception but you don’t want to block the execution, you can catch it and manually report the error to Inspector:

try {
    // Your dangerous code here...
} catch (GuzzleException $exception) {
    inspector()->reportException($exception);
}

Furthermore, if the HTTP request fails, you are alerted in real time via your inbox so you can examine the error.

You even get access to detailed information gathered by Inspector in real time:

Conclusion

When a customer reports that something isn’t working, it forces you to drop whatever you are doing, start trying to reproduce the scenario, and then recapture and reanalyze the logs in your toolset.

Getting an accurate picture of what’s happening can take hours or even days. Inspector can make a massive difference in efficiency, productivity, and customer happiness.

New to Inspector?

Get a monitoring environment specifically designed for software developers, avoiding all the server and infrastructure configuration that many developers hate to deal with.

Thanks to Inspector, you will never need to install things at the server level or make complex configurations in your cloud infrastructure.

Inspector works with a lightweight software library that you install in your application like any other dependency. Try the official Laravel package.

Create an account, or visit our website for more information: https://inspector.dev/laravel/

Laravel News Links

Top Gun: Maverick spoiler-free review: A worthy return to the danger zone

https://cdn.arstechnica.net/wp-content/uploads/2022/05/topgunmaverick-listing-2-760×380.png

Tom Cruise, still crazy after all these years.

Skydance Productions

As I walked out of my review screening of Top Gun: Maverick, coming down from its adrenaline-filled finale, a small part of my brain began looking for dents in the film’s armor. Maybe it’s the critic in me, but my thoughts didn’t need long to land on stuff from the original film—a plot point, a stylistic choice, a particular character—that didn’t return this time.

I chewed on those thoughts for a second, but before I could lose myself in cataloging them at length, a sensation came over me. It landed like a massive G-force blast, as if I were a jet fighter pilot attempting a seemingly impossible climb: one of great satisfaction with this sequel and admiration that this film pulled off the impossible feat of adhering to the old while doing something new.

Returning to old haunts.

Skydance Productions

The series’ predilection for steering military theater toward Hollywood-style silliness is arguably more tolerable, as tempered by a savvy script and cutting-edge stunt work. The character development hits important notes for both Pete “Maverick” Mitchell and the people in his high-speed orbit, and the film’s focused supporting cast mostly hits the mark.

Perhaps most important of all, an aging-yet-excited Tom Cruise never steps beyond his pay grade. The Top Gun star of roughly 35 years ago ruled movie theaters for different reasons than the man he is today, yet this film never sees his character Maverick betray his beloved traits or feel like an old man faking like a 20-something hotshot.

A few of the series’ moving parts have been jettisoned so many years later, and lifetime fans of the film will definitely notice them. But Top Gun‘s core tenets—incredible fighter-jet combat, enjoyable cheese, and the big-grin smile of Cruise—have returned in arguably finer form than the original.

“Don’t think, just do”

Skydance has only released traditional theater-ratio footage of the film for consumption outside of theaters, so you’ll have to trust me when I say that shots like this look doubly incredible inside a 16:10 ratio container.

Skydance Productions

Top Gun: Maverick has the added benefit of looking incredible on a large screen, and it’s perhaps the best IMAX format showcase of the past five years. Cruise and co. were clearly eager to take cinematic air combat to the next level, and there’s no getting around it: If you have to stitch three hospital-grade masks together or rent out a private room to feel comfortable in a public movie theater in 2022, you should consider doing so for this film.

Every major flight scene includes per-cockpit camera rigs that emphasize the added height of IMAX’s 16:10 ratio, and in these moments, flying is choreographed to let this camera angle showcase Top Gun-caliber stuff. You might see another plane in view, or vapor trails, or dumped flares dancing and billowing smoke, or a glancing shadow of the jet against the Earth’s surface because the F/A-18 Hornet is actually flying that freaking low in real life. In these moments, the actors don’t hesitate to explode with emotion, whether shrinking back or splashing their palms on the cockpit glass that extends across the entire IMAX screen.

In Top Gun: Maverick, all buzzing is essential—and it’s always portrayed with incredible detail.

Skydance Productions

Top Gun: Maverick spends a lot of time in this perspective, so it’s good to see the stunt teams and cinematographers repeatedly strike a hot beach volleyball high-five over this collaboration. Yet the crew also makes up for lost time since the first film was made by letting external cameras, including expertly staged drones, linger over death-defying flight sequences or use wide-angle shots to show how foolishly close its stunt flyers zip past each other. The 1986 style of hard camera cuts to stitch together a shot-down bogey is gone. This time, we get to watch full dogfights that lead up to each climactic kaboom.

Really, the lengths to which this film goes to favor real-life stunts over green-screen trickery are incredible. Everyone will have a different favorite on this front, but mine is a dramatic fly-by somewhat early in the film that I won’t spoil for you, except to say that it was reportedly filmed with actors taking the real-life brunt of its buzz. You’ll know it (and feel it) when you see it.

My only shoulder-shrug about the air-combat content comes from a few CGI-filled briefings. In each of these, commanding officers point at holograms and break down each step of a mission or exercise—as if Cruise insisted that this film resemble the Mission: Impossible series in one way or another. While these moments are tolerable, I felt they were explanation overkill that took time away from getting the film’s cameras up into the danged skies.

Ars Technica

New FBI Report Shows Armed Citizens STOP Mass Shootings

https://www.ammoland.com/wp-content/uploads/2022/05/FBI-Report-Active-Shooter-Incidents-in-the-USA-2021-500×352.jpg

FBI Report: Active Shooter Incidents in the USA 2021

BELLEVUE, WA -(Ammoland.com)- A newly-released FBI report on “active shooter incidents” in 2021 [embedded below] revealed four of those killers were stopped by armed private citizens, and the Second Amendment Foundation says this is strong evidence the right to keep and bear arms is as important today as it was when the Constitution was ratified more than 200 years ago.

There were 61 active shooter incidents last year, the FBI report said. All but one of the killers were male, and they ranged in age from 12 to 67 years. SAF founder and Executive Vice President Alan M. Gottlieb lauded the FBI report for acknowledging the role played by legally-armed citizens in stopping some of these events.

“It is important to acknowledge these citizen first responders, and the countless lives their heroic actions saved,” Gottlieb said.

“Truly, these were good guys with guns.”

“There is one revelation in the report for 2021 that underscores the importance of right-to-carry laws,” Gottlieb noted. “According to the FBI, active shooter data shows an upward trend in such incidents. People have a right to defend themselves and their loved ones if they are caught in the middle of such mayhem. Unquestionably, in each of the four cases cited by the FBI report, lives were saved.”

According to the FBI, in addition to the four perpetrators killed by armed citizens, 30 of these violent thugs were apprehended by law enforcement, and 14 were killed by police officers. One was killed in a vehicle accident during a law enforcement pursuit, 11 others committed suicide, and one remains at large, the report notes. In 2020, the FBI report noted, citizen first responders killed two criminals in the act.

Gottlieb has co-authored several books dealing with armed self-defense by private citizens, the most recent being “Good Guys With Guns,” published by Merril Press.

“Each year,” he said, “there are tens of thousands of cases in which private citizens use firearms in self-defense. The four incidents in which the criminals were killed represent a small but significant part of this larger story. The bottom line is that our Second Amendment rights are just as relevant today as they have ever been.”

FBI Report: Active Shooter Incidents in the USA 2021


About Second Amendment Foundation

The Second Amendment Foundation (www.saf.org) is the nation’s oldest and largest tax-exempt education, research, publishing and legal action group focusing on the Constitutional right and heritage to privately own and possess firearms. Founded in 1974, The Foundation has grown to more than 750,000 members and supporters and conducts many programs designed to better inform the public about the consequences of gun control.

Second Amendment Foundation

AmmoLand.com

How to scale an agency while managing 2,000 client websites

https://www.noupe.com/wp-content/uploads/2022/05/pexels-canva-studio-3194519-964×1024.jpg

From ensuring that you hire the right people and are retaining employees, to onboarding long-term clients that will allow your business to grow, there’s no doubt that scaling any agency comes with its challenges. Once you reach a certain level in your agency, serving and managing multiple clients and their websites, things can get even more demanding. 

As a creative agency owner with over 25 years of experience, I can sincerely say that scaling an agency while managing 2,000 websites is no easy feat, but with the right know-how and tools, it is possible. 

Simplify the most important processes

When you’re managing a myriad of different elements, simplifying all areas of how your agency operates is essential. To achieve this, agencies must first begin by assessing which critical tasks are taking up the most time or require the most input from the large majority of their teams. Essentially, business owners need to strategically lower the impact that the most burdensome and important work has on the operations team. 

An agency specializing in designing websites, such as my own, will most likely realize that they need to understand their team’s strengths and design logistics to optimize the business. In my own business, we came to understand that we needed a software solution that would simplify and facilitate our agency’s ability to easily produce professional websites at a faster rate for our clients. Our thinking at the time was that if we reduced the effort and time it took to fulfill our most critical task, we could free up time and resources to onboard new clients and ultimately grow our business. 

Utilizing a low-code/no-code website building solution such as Duda helped us and will help growing agencies simplify the production and the workflow of their development, creative, and management teams. As a result, an agency’s core employees can rapidly create and finalize 10-page websites – which would normally take 20 hours to develop – within three to four hours. With up to 17 additional hours freed up, per website build, agencies that are just starting out can rapidly grow their business. More established agencies that manage many accounts will also benefit greatly from having additional hours to spare, as they can utilize these hours to manage even more clients and deliver even more products and services. For example, my agency only has one person in charge of maintaining 2,000 sites, because the website builder we use, Duda, allows us to easily take care of updates and ensure modern websites that are constantly upgraded to the latest best practices.

Deliver a product that’s easy to use and unmatched in quality

The quality of the final product delivered to a client will greatly affect whether an agency will receive repeat business and word-of-mouth referrals. While spending money on marketing to bring in new clients is a great strategy in the short term, giving existing customers an incredible user experience and product will ensure that clients become free brand ambassadors, referring people to your business and plugging your service on social media.

Agencies that are managing a significant number of clients and want to drive high volumes of growth must utilize the most effective product support solutions to give their customers the best treatment. Pivoting to superior software, sooner, will help agencies deliver high-quality products and services. With a plethora of software solutions on the market, agencies must set aside time to investigate and test new software. Finding a solution that enhances the quality of the final product and makes the product easy to use might take time, but agencies should see this time as a necessary investment that will have good returns. 

Nurture revenue-building relationships with excellent support

Every relationship has the potential to be the key to an agency’s next big deal and growth. I’ll refer to my own agency as an example. In 2010, we started out with only eight clients. By keeping our clients happy, and with no sales or business development team, we grew to over 1,300 clients and counting. Clients who are well taken care of will reward you, and those who feel that your agency is not meeting their needs will warn their networks about your service. 

A major factor in maintaining a good relationship is the quality of support and communication they receive. When there is a request for their website to be updated, how long will it take for your agency to respond? If an average of a hundred service requests are received each week, can all requests be answered within two to three hours? Does your agency have a post-launch communication plan? These are the questions that agency owners need to ask themselves in order to assess the quality of their support. Agencies should never underestimate the power of calling clients regularly, solving their problems expeditiously, and sharing helpful information and insights without being prompted. 

Good service almost always leads to gaining a client’s trust. Once an agency has earned the trust of its clients, it is in a better position to offer additional services and will likely see clients remaining customers for a long time. While some may argue that retaining customers for a long period of time is insignificant, the reality, according to a survey conducted by conversion rate optimization experts Invesp, is that acquiring a new customer is five times more expensive than keeping an existing one. Mistreating or ignoring existing clients won’t get agencies any closer to actualizing their goal of scaling their business. 

A very important caveat is that not all clients are worth keeping. Most agencies will at some point encounter a client who cannot be satisfied, no matter what you do. To illustrate how we deal with high-stress clients in my own business, I’ll refer to a quarterly Board of Directors meeting which took place in 2018. At the meeting, one of the Board members asked what our client turnover rate was, and I proudly replied: “less than one percent.” To my surprise, the entire Board was adamant that the client churn rate should be higher, as keeping difficult clients was bound to hinder the agency’s continuous growth. Today, we are able to identify which clients are worth keeping and which aren’t – a skill that all growing agencies should adopt. While we have only had to let go of about 10 to 15 clients, the shift in thinking resulted in increased productivity and, more importantly, a much better atmosphere in the workplace. No client is worth keeping if they bring unhappiness and unnecessary stress to an agency’s employees.

Quality begets quantity

Growing an agency while managing thousands of clients, while extremely challenging, is possible. Agencies that want to grow must simplify processes, deliver a high-quality service, and excel at customer support to effectively and seamlessly scale. When an agency specializes in a specific product offering, it’s critical to streamline the process of how the product is built. Quality will result in quantity: the higher the quality of the final product, the more revenue an agency will see. Furthermore, and most importantly, offering memorable and outstanding customer service will guarantee that clients spread the word and drive significant business growth.

The post How to scale an agency while managing 2,000 client websites appeared first on noupe.

noupe

Exploring Aurora Serverless V2 for MySQL

https://mydbops.files.wordpress.com/2022/05/null.png

Aurora Serverless V2 recently became generally available (21-04-22) for MySQL 8 and PostgreSQL, with promising features that overcome the V1 disadvantages. Below are the major features.

Features

  • Online auto instance upsize (vertical scaling)
  • Read scaling (supports up to 15 read replicas)
  • Supports mixed-configuration clusters, i.e., the writer can be a normal (provisioned) Aurora instance while the readers run Serverless V2, and vice versa
  • Multi-AZ capability (HA)
  • Aurora global databases (DR)
  • Scaling based on memory pressure
  • Scales vertically while SQL is running
  • Public IP allowed
  • Works with a custom port
  • Compatible only with Aurora version 3.02.0 and above, i.e., >= MySQL 8.0.23
  • Supports binlog
  • Support for RDS Proxy
  • High cost savings

Now let’s get our hands dirty by launching Serverless V2 for MySQL.

Launching Serverless V2

It’s time to choose the Engine & Version for launching our serverless v2

Engine type : Amazon Aurora

Edition : Amazon Aurora MySQL – Compatible edition ( Only MySQL used)

Filters : Turn ON Show versions that support ServerlessV2 ( saves time )

Version : Aurora MySQL 3.02.0 ( compatible with MySQL 8.0.23 )

Instance configuration & Availability

DB instance class : Serverless ‘Serverless v2 – new’

Capacity Range : Set based on your requirements and costing ( 1 to 64 ACUs )

Aurora capacity unit (ACU) : 2 GB RAM + corresponding CPU + network

Availability & Durability : Create an Aurora replica

While choosing the capacity range, the minimum ACU defines the lowest capacity the cluster can scale down to (here, 1 ACU), and the maximum ACU defines the highest capacity it can scale up to.

Connectivity and misc settings:

Choose the below settings based on your application needs

  • VPC
  • Subnet
  • Public access (avoid it, for basic security)
  • VPC security group
  • Additional configuration (cluster group, parameter group, custom DB port, Performance Insights, backup config, auto minor version upgrade, deletion protection)

To keep it short, I accepted all the defaults and proceeded to “Create database”.

Once you click “Create database” you can see the cluster being created. Initially, both nodes in the cluster will be marked as “Reader instance” – don’t panic, it’s quite normal.

Once the first instance becomes available, it is promoted to “Writer” and the cluster is ready to accept connections. After that, the reader gets created in an adjacent AZ; refer to the image below.

Connectivity & End-point:

A Serverless V2 cluster provides three kinds of endpoints: the highly available cluster endpoint, the read-only endpoint, and individual instance endpoints.

  • Cluster endpoint – This endpoint connects your application to the current primary DB instance of the Serverless V2 cluster. Your application can perform both read and write operations through it.
  • Reader endpoint – A Serverless V2 cluster has a single built-in reader endpoint, used only for read-only connections. It also balances connections across up to 15 read-replica instances.
  • Instance endpoints – Each DB instance in a Serverless V2 cluster has its own unique instance endpoint.

You should always map the cluster and read-only endpoints in your applications for high availability, as sketched below.
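
In application code this typically means sending writes through the cluster endpoint and reads through the reader endpoint. A minimal PHP/PDO sketch; the endpoints, database name, and credentials below are placeholders to replace with your own cluster’s values:

// Writes go to the cluster (primary) endpoint
$writer = new PDO(
    'mysql:host=my-cluster.cluster-xxxx.ap-south-1.rds.amazonaws.com;port=3306;dbname=app',
    'app_user',
    'secret'
);

// Reads go to the built-in reader endpoint (note the cluster-ro- prefix)
$reader = new PDO(
    'mysql:host=my-cluster.cluster-ro-xxxx.ap-south-1.rds.amazonaws.com;port=3306;dbname=app',
    'app_user',
    'secret'
);

$writer->exec("INSERT INTO events (name) VALUES ('signup')");
$count = $reader->query('SELECT COUNT(*) FROM events')->fetchColumn();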

Monitoring:

Though CloudWatch covers the metrics needed, I used PMM to get deep and granular insight into DB behavior (I followed this link for a quick installation). In short, for Serverless I wanted to watch the following:

  • DB uptime, to see if the DB reboots during scale-up or scale-down
  • Connection failures
  • Memory resizes (InnoDB buffer pool)

Here I took a T2.large machine to install & configure PMM.

Now let’s take Serverless V2 for a spin:

The beauty of Aurora Serverless V2 is that it supports both Vertical scaling ie., auto Instance upsize as well as Horizontal scaling with read-replicas.

The remaining portion of this blog covers the vertical scaling feature of Serverless V2.

Vertical scaling:

With most clusters out there, the most difficult part is upsizing the writer instance on the fly without interrupting existing connections. Even when using proxies/DNS for failover, there are usually connection failures.

I was especially curious to test the vertical scaling feature, since AWS claims it happens online, without disrupting existing connections, i.e., even while queries are running. Wow!! Fingers crossed.

Come on, let’s begin the test. I decided to remove the “reader instance” first; below is the view of our cluster now.

My initial buffer pool allocation was 672MB: at our minimum of 1 ACU we have 2GB of memory, of which ¾ is allocated to the InnoDB buffer pool.

Test Case:

The test case is quite simple: I am imposing an insert-only (write) workload using the simple load emulation tool sysbench.

Below is the command used

# sysbench /usr/share/sysbench/oltp_insert.lua --threads=8 --report-interval=1 --rate=20 --mysql-host=mydbops-serverlessv2.cluster-cw4ye4iwvr7l.ap-south-1.rds.amazonaws.com --mysql-user=mydbops --mysql-password=XxxxXxXXX --mysql-port=3306 --tables=8 --table-size=10000000 prepare

I started to load 8 tables in parallel with 8 threads and a dataset of 10M records per table (--table-size=10000000).

Observations and Timeline:

Scale-up:

Below are my observations during the scale-up process

  • Inserts started at 03:57:40, with COM_INSERT reaching 12.80/sec while Serverless was running with a 672MB buffer pool. Exactly 10 secs later, at 03:57:50, the first scaling process kicked in and the buffer pool memory was raised to 2GB. Let’s have a closer look.
  • A minute later, at 03:58:40, the second scaling process kicked in and the buffer pool size leaped to ~9GB.
  • I was keenly watching the uptime of MySQL during each scale-up, as well as the thread failures. To my surprise, both were intact: memory (the buffer pool) was scaling linearly at regular intervals of 60 secs and reached a maximum of 60GB at 04:11:40.
  • The data loading completed at 04:10:50 (graphical stats).

Scale Down:

  • After the inserts completed, there was a brief period of about 5 min, since in production scale-down has to be done in a slow and steady fashion. The DB was now completely idle and the connections were closed; at 04:16:40 the buffer pool memory dropped from 60GB to 48GB.
  • The scale-down process then kicked in at regular intervals of 3 mins from the previous scale-down operation, and finally, at 04:34:40, the Serverless instance was back to its minimum capacity.

Adaptive Scale-up & Down

I would say this entire scale-up and scale-down process is a very adaptive, intelligent, and well-organized one:

  • No lag in DB performance.
  • A linear increase and decrease of resources is maintained.
  • No DB reboots, and connection failures were kept at bay.

Below is the complete snapshot of the buffer pool memory scale-up and scale-down process, along with the INSERT throughput stats; both processes together took around ~40 mins.

Along with the buffer pool, Serverless also auto-tunes the following MySQL-specific variables:

  • innodb_buffer_pool_size
  • innodb_purge_threads
  • table_definition_cache
  • table_open_cache

AWS recommends keeping these values at their defaults in the custom parameter group of Serverless V2.
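
If you want a quick sanity check of this auto-tuning without a full PMM setup, you can simply poll a variable from any client. A minimal PHP/PDO sketch (endpoint and credentials are placeholders for your own cluster’s values):

// Watch Serverless V2 resize the buffer pool live during a load test
$pdo = new PDO(
    'mysql:host=my-cluster.cluster-xxxx.ap-south-1.rds.amazonaws.com;port=3306',
    'app_user',
    'secret'
);

while (true) {
    $row = $pdo->query("SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size'")
               ->fetch(PDO::FETCH_ASSOC);
    printf("%s  buffer_pool = %.2f GB\n", date('H:i:s'), $row['Value'] / 1024 ** 3);
    sleep(10); // sample every 10 seconds
}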

Below is the image summary of the entire scale-up and scale-down process.

AWS has nailed vertical scaling with Aurora Serverless; from my point of view it’s production-ready, even though it’s still in the early GA phase.

Summary:

  • The upsize happens gradually, on demand, every 1 min.
  • The downsize happens gradually, on idle load, every 3 min.
  • Supported from MySQL 8.0.23 onwards.
  • The above-mentioned MySQL variables are best left untouched at their defaults.

Use Cases:

Below are some of the use cases where Aurora serverless V2 fits in perfectly

  • Applications such as gaming, retail, and online gambling apps, where usage is high for a known period (say, daytime or during a match) and idle or less utilized the rest of the time
  • Suited for testing and developing environments
  • Multi-tenant applications where the load is unpredictable
  • Batch job processing

This is just a starting point; there are still a lot of conversations pending on Aurora Serverless V2, such as horizontal scaling (read scaling), migration, parameters, DR, Multi-AZ failover, and pricing. Stay tuned here!!

Would you love to test Serverless V2 in your production environment? Mydbops database engineers are happy to assist.

Planet MySQL