Laravel Vapor – Serverless Deployment Solution For Laravel Apps


Laravel Vapor is another of the Laravel products we deeply love. While we don’t actively use it within the team, because our significant DevOps capacity works on an infrastructure-as-code basis, it is an excellent tool if you are a solo developer with limited DevOps expertise. Essentially, Vapor is a serverless deployment platform designed specifically for Laravel applications. What makes Laravel Vapor a good choice is its simplicity: it integrates seamlessly with Amazon Web Services (AWS), tapping into serverless technologies like AWS Lambda, API Gateway, and more.

One of the key benefits of using Laravel Vapor is its ability to eliminate traditional server management tasks. Developers no longer need to worry about provisioning servers, configuring load balancers, or managing scaling rules. Vapor handles all these aspects behind the scenes, allowing developers to focus on writing code and building features.

Deploying the application with Vapor is as easy as waving a wand. A simple Laravel Vapor CLI (Command-Line Interface) tool allows developers to deploy their projects in seconds. It takes care of packaging your code, managing resources, and updating your application environment.

Now let’s talk about how to get started with Laravel Vapor. We’ll explore its best features, the common challenges you might face while using Vapor, and some handy tips for overcoming them.

Setting up a Laravel Project for Vapor

To get started with Laravel Vapor, you’ll first need a Laravel project. If you don’t have one yet, create a new Laravel project or use an existing one. Make sure you have Laravel and Composer installed on your local machine.

Once your Laravel project is ready, the next step is installing the Vapor packages with Composer. Open your terminal or command prompt, navigate to your project’s directory, and run the following commands:

composer require laravel/vapor-cli --update-with-dependencies
composer require laravel/vapor-core --update-with-dependencies

The first package provides the vapor command-line tool (it can also be installed globally with composer global require laravel/vapor-cli); the second contains the runtime code your application needs inside Vapor’s serverless environment.

Configuring and Deploying the Application

With Laravel Vapor installed, you need to configure your project for deployment. Laravel Vapor utilizes a vapor.yml configuration file to specify deployment settings and other details. 

To generate the vapor.yml file, run the Vapor CLI’s init command from your terminal or command prompt:

vapor init

This command will create the vapor.yml file in the root directory of your Laravel project.

Open the vapor.yml file and configure your environments, environment variables, and other deployment settings. This file lets you customize various aspects of your Vapor deployment, such as memory allocation, build and deploy steps, and more. (Your AWS credentials themselves are linked to your Vapor account through the Vapor dashboard rather than stored in this file.)
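The generated file looks roughly like the following sketch. The id, name, and the values shown are placeholders, and the exact set of available keys varies by Vapor version, so treat this as an illustration rather than a reference:

```yaml
id: 12345
name: my-app
environments:
    production:
        memory: 1024        # Lambda memory for HTTP requests, in MB
        cli-memory: 512     # memory for queued jobs / CLI invocations
        build:
            - 'composer install --no-dev'
        deploy:
            - 'php artisan migrate --force'
```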

Exploring Laravel Vapor’s Deployment Workflow

Now that your Laravel project is set up for Vapor, it’s time to deploy it. Vapor provides a straightforward deployment workflow that simplifies the process.

To deploy your application to Vapor, run the following command in your terminal or command prompt, naming the environment you want to deploy:

vapor deploy production

This command will initiate the deployment process. Vapor will package your application code, upload it to AWS, and create the necessary infrastructure to run your Laravel application in a serverless environment.

During deployment, Vapor displays progress updates, allowing you to track the process. Once the deployment is complete, Vapor will provide you with a URL where your application is accessible.

Following these steps, you can easily set up a Laravel project for Vapor, configure its deployment settings, and deploy your application to a serverless environment. 

If you find it challenging to set up or deploy a Laravel project on Vapor, we recommend the course created by a Laravel team member to gain an in-depth understanding of the platform, its usage, and its features. It can serve as a valuable resource, especially for beginners, in overcoming any issues.

Exploring Vapor’s Features


Laravel Vapor brings a lot of powerful features to the table that make it an excellent choice for deploying and managing Laravel applications in a serverless environment. So let’s take a closer look at some of the most significant features that set Vapor apart:

  • Laravel Vapor CLI and its capabilities: The CLI serves as an invaluable assistant tool that boosts developers’ experience by providing greater flexibility and efficiency in using Laravel Vapor. This command-line interface (CLI) allows developers to interact with and manage their Vapor environments and deployments. With the Vapor CLI, you can easily deploy your applications, manage Laravel Vapor environment variables, view logs, and perform various other tasks related to your Laravel Vapor projects. 
  • Setting Up Scheduler: The fact that Vapor has a major focus on Lambda function utilization might give you an idea that it creates a lot of limitations for different Laravel components, including the scheduler. But in reality, Vapor has efficiently addressed this concern as well. The scheduler feature allows you to define and manage scheduled tasks for your application. With this feature, you can execute recurring tasks, such as database backups, sending emails, and processing queues, at specified intervals. The good thing is that the scheduler is fully integrated with the underlying AWS infrastructure, which ensures reliable execution of tasks.
  • Vapor UI with Monitoring Capabilities: Vapor provides a web-based user interface with monitoring capabilities for your applications. Through the Vapor UI, you can access and view essential metrics such as error logs, queued jobs, HTTP requests, and scheduled tasks. This visibility into the performance and behaviour of your application helps you identify issues and monitor its health way more effectively.
  • Setting up a database with Vapor: Vapor makes it straightforward to set up and manage databases for your Laravel applications. It integrates with Amazon RDS (Relational Database Service) and supports MySQL and PostgreSQL databases. You can create and configure database instances using the Laravel Vapor CLI, allowing you to easily scale your database resources as needed. Vapor takes care of handling the database infrastructure, backups, and maintenance, allowing you to focus on building your application.
  • Jumpboxes to Connect to a Private Database: A jumpbox is a small server that acts as a mediator, allowing you to securely connect to your private databases. It provides a secure bridge between your Vapor environment and the private database instances. This ensures that your Laravel Vapor database remains protected and isolated from external access while still allowing Vapor to interact with it seamlessly.
  • Metrics: Vapor provides detailed metrics and insights into the performance of your Laravel applications. These metrics can include information such as request/response times, function execution, and resource utilization. Analyzing these metrics helps you optimize your application’s performance and resource consumption.
  • Alarms: With Vapor, you can set up alarms to get notified about specific events or thresholds that exceed predefined limits. Alarms can be configured to trigger notifications via various channels, such as email or SMS. By setting up alarms, you can proactively monitor and respond to critical issues in your serverless Laravel application.

All these features make using Laravel Vapor extremely comfortable and user-friendly. While performing certain actions from the AWS console can be complicated and counterintuitive, Vapor simplifies the deployment process, letting developers ship applications without a dedicated DevOps or server-administration effort. This simplicity is probably one of the greatest benefits of using Vapor.

Cost Optimization with Laravel Vapor

Another notable thing about Laravel Vapor is that it offers a more granular and cost-effective billing model than traditional server-based setups. Since Laravel Vapor runs on AWS Lambda, its cost is directly driven by the volume of executed requests and their processing times. If the application suffers performance issues or delays, AWS Lambda costs rise accordingly.

For a high-traffic application, say one with more than 100k users and significant delays between requests and responses, Vapor might not be the most cost-effective or optimal option. On the other hand, for smaller projects or applications that do not expect large traffic spikes, Vapor is one of the most viable options available, alongside alternatives such as Laravel Forge.

Limitations and Considerations

While Laravel Vapor offers many benefits, it’s essential to be aware of some potential challenges and limitations you might encounter. One limitation is that Vapor currently supports only AWS as the underlying infrastructure, so if you prefer a different cloud provider, you might need to explore alternative serverless platforms.

Another consideration is that the serverless architecture may require adjustments to your application’s code and architecture. Certain Laravel features, such as long-running processes or file storage, may require modifications to align with the stateless nature of serverless environments.

Scaling considerations are also important. While Vapor handles automatic scaling for you, sudden spikes in traffic may require adjusting scaling rules and ensuring your application can handle the increased load.

To make the most of Laravel Vapor and overcome common challenges, here are some valuable tips:

  1. Optimize your code and architecture: Embrace serverless best practices by optimizing your codebase and architecture. Consider minimizing long-running processes, optimizing database queries, and utilizing caching mechanisms to maximize performance and efficiency.
  2. Monitor and debug: Leverage the monitoring and debugging tools provided by Vapor and AWS. Monitor your application’s performance, identify and address any errors or bottlenecks, and make configuration adjustments as needed.
  3. Stay informed and up to date: Keep yourself updated with the latest Vapor documentation, release notes, and community resources. Regularly check for updates, new features, and bug fixes to ensure you’re taking advantage of the latest improvements and optimizations.
  4. Engage with the community: Join Laravel forums, social media groups, and other developer communities to connect with fellow Vapor users. Share your experiences, exchange insights, and seek advice from others who have overcome similar challenges.
  5. Plan for scalability: Design your application with scalability in mind. Utilize Vapor’s automatic scaling capabilities and plan for potential traffic spikes by implementing efficient caching strategies, utilizing queue workers, and optimizing database configurations.

By understanding and addressing potential limitations and following these tips, you can navigate common challenges and make the most of Laravel Vapor’s serverless deployment capabilities for your Laravel applications.

 

Laravel News Links

HOW TO BOOST MYSQL SCALABILITY | 5 EFFECTIVE WAYS

With increasing data and user demand, ensuring the scalability of your MySQL database has become crucial to maintaining optimal performance. Scalability lets your MySQL database handle growing amounts of data, traffic, and user requests. But how do you boost MySQL scalability?

This is a common question among MySQL users, and if you are one of them, this article is for you. In short, you can boost MySQL scalability by optimizing your queries, database schema, and server configuration.

In this article, we’ll explore five effective ways to boost MySQL scalability and handle your database’s growth effectively. So, what are you waiting for? Let’s explore them below!

5 Ways To Boost MySQL Scalability

There are many scalability challenges we see with clients over and over. The list could easily include 20, 50, or even 100 items, but we narrowed it down to the five biggest issues we see.

1. Tune Those Queries

By far the biggest bang for your buck is query optimization. Queries can be functionally correct and meet business requirements without being stress-tested for high traffic and high load. This is why we often see clients hit growing pains and scalability challenges as their site becomes more popular.

This also makes sense. It wouldn’t necessarily be a good use of time to tune a query for some page off in a remote corner of your site that doesn’t receive real-world traffic. So some amount of reactive tuning is common and appropriate.

Enable the slow query log and watch it. Use pt-query-digest (the successor to Maatkit’s mk-query-digest, now part of Percona Toolkit) to analyze the log. Also, make sure the log_queries_not_using_indexes flag is set.

Once you’ve found a heavy resource-intensive query, optimize it! Use the EXPLAIN facility, use a profiler, look at index usage and create missing indexes, and understand how it is joining and/or sorting.
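A typical tuning round looks like the following sketch, using a hypothetical orders table and index name. The EXPLAIN output shown in the comments is indicative; the exact values depend on your data and MySQL version:

```sql
-- Before the index exists, EXPLAIN typically reports type: ALL
-- (a full table scan) with a large rows estimate for this query:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- Create the missing index on the filtered column...
CREATE INDEX idx_orders_customer ON orders (customer_id);

-- ...and EXPLAIN should now report type: ref with
-- key: idx_orders_customer and a far smaller rows estimate.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```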

2. Employ Master-Master Replication

Master-master active-passive replication, otherwise known as circular replication, can be a boon for high availability, but also for scalability. That’s because you immediately have a read-only slave for your application to hit as well.

Many web applications exhibit an 80/20 split, where 80% of the activity is reads, or SELECTs, and the remainder is INSERTs and UPDATEs. Configure your application to send read traffic to the slave, or rearchitect so this is possible. This type of horizontal scalability can then be extended further, adding additional read-only slaves to the infrastructure as necessary.
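The read/write split can be sketched in a few lines. This is a toy router with hypothetical host names, not a production connection manager (real setups usually use a proxy or the framework’s replica support), but it shows the core decision:

```python
# Toy sketch: route read-only statements to a replica, everything
# else to the master. Host names here are hypothetical.
class ReplicatedRouter:
    def __init__(self, master, replicas):
        self.master = master
        self.replicas = replicas
        self._next = 0

    def pick(self, sql: str) -> str:
        """Return the host that should run this statement."""
        if sql.lstrip().upper().startswith("SELECT"):
            # Round-robin across the read-only slaves.
            host = self.replicas[self._next % len(self.replicas)]
            self._next += 1
            return host
        return self.master  # writes always go to the master

router = ReplicatedRouter("master-db", ["replica-1", "replica-2"])
print(router.pick("SELECT * FROM users"))          # replica-1
print(router.pick("UPDATE users SET active = 1"))  # master-db
print(router.pick("select 1"))                     # replica-2
```

Adding a third read-only slave is then just one more entry in the replicas list.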

If you’re setting up replication for the first time, we recommend you do it using hot backups.

Keep in mind that MySQL replication has a tendency to drift silently from the master. Data can really get out of sync without throwing errors! Be sure to bulletproof your setup with checksums.
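The idea behind checksum tools like pt-table-checksum can be illustrated in miniature: hash each table’s rows on both sides and compare. The real tool computes checksums inside MySQL and propagates them through replication; this is only a conceptual sketch with made-up rows:

```python
import hashlib

def table_checksum(rows):
    """Hash a table's rows; sort first so row order doesn't matter."""
    h = hashlib.sha256()
    for row in sorted(rows):
        h.update(repr(row).encode())
    return h.hexdigest()

master = [(1, "alice"), (2, "bob")]
slave  = [(1, "alice"), (2, "bobby")]   # silently drifted, no error raised

# Comparing checksums exposes the drift immediately:
print(table_checksum(master) == table_checksum(slave))  # False
```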

3. Use Your Memory

It sounds very basic and straightforward, yet details are often overlooked. At the very least, be sure to set these:

  • innodb_buffer_pool_size
  • key_buffer_size (MyISAM index caching)
  • query_cache_size – though beware of contention on large SMP boxes (the query cache is deprecated in MySQL 5.7 and removed in 8.0)
  • thread_cache & table_cache
  • innodb_log_file_size & innodb_log_buffer_size
  • sort_buffer_size, join_buffer_size, read_buffer_size, read_rnd_buffer_size
  • tmp_table_size & max_heap_table_size
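Pulled together, a my.cnf fragment covering the list above might look like this. The values are hypothetical starting points for a dedicated InnoDB box; tune them to your workload and verify each variable name against your MySQL version:

```ini
[mysqld]
innodb_buffer_pool_size = 5G      # the big one for InnoDB data/index caching
innodb_log_file_size    = 256M
innodb_log_buffer_size  = 16M
key_buffer_size         = 64M     # only matters for MyISAM indexes
query_cache_size        = 0       # disable; removed entirely in MySQL 8.0
thread_cache_size       = 16
sort_buffer_size        = 2M      # per-connection, so keep modest
join_buffer_size        = 2M
tmp_table_size          = 64M
max_heap_table_size     = 64M     # keep these two equal
```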

4. RAID Your Disk I/O

RAID 5 is slow for inserts and updates. It is also almost non-functional during a rebuild if you lose a disk, with very, very slow performance. What should you use instead?

RAID 10, mirroring plus striping, with as many disks as you can fit in your server or RAID cabinet. A database does a lot of disk I/O even if you have enough memory to hold the entire database.

Why? Sorting requires rearranging rows, as do GROUP BY, joins, and so forth. Plus, the transaction log is disk I/O as well!

Are you running on EC2? In that case, EBS is already fault-tolerant and redundant, so give your performance a boost by striping only (no mirroring needed) across a number of EBS volumes using Linux md software RAID.

5. Tune Key Parameters

These additional parameters can also help a lot with performance.

innodb_flush_log_at_trx_commit=2

This speeds up inserts and updates dramatically by being a little lazy about flushing the InnoDB log buffer: the buffer is flushed to disk roughly once per second instead of at every commit. The trade-off is that a server crash can lose up to about a second of committed transactions, so weigh your durability requirements, but for most environments this setting is recommended.

innodb_file_per_table

InnoDB, like Oracle, was originally built around a shared-tablespace storage model. Unfortunately, the default of keeping everything in a single tablespace turns out to be a performance bottleneck, with contention for file descriptors and other internal resources.

This setting makes InnoDB create a separate tablespace and underlying datafile for each table, much as MyISAM does.

Frequently Asked Questions (FAQs)

What Is Scalability in MySQL?

Scalability is crucial to prevent your database from collapsing under increased traffic. A scalable database can handle large data sets and big queries in a short period; where reading and writing big data would otherwise take too much time, a scalable database can significantly reduce it.

How To Make MySQL Database Scalable?

To make your MySQL database scalable, you need to handle concurrent requests without duplicating IDs, and you can also use multiple masters in MySQL to scale your operations horizontally.

Conclusion

There is no need to restate the importance of scalability in MySQL. The five most effective ways to boost your MySQL scalability are explained in this article, and we hope a combination of these strategies will be the most effective solution for your specific case. For other questions on this topic, don’t hesitate to ask in the comment box below. Thanks for reading!

The post HOW TO BOOST MYSQL SCALABILITY | 5 EFFECTIVE WAYS appeared first on iheavy.

Planet MySQL

Top MySQL DBA Interview Questions (Part 2)


Continuing from our Top MySQL DBA interview questions (Part 1) here are five more questions that test a MySQL DBA’s knowledge, with two that will help suss out some personality traits.

Top MySQL DBA Interview Questions

6. Disk I/O

Disk performance should be an ever-present concern for a DBA. Although they don’t need to be a storage specialist, they should have a working knowledge of the area. Ask them about RAID levels, mirroring versus striping, and so forth. Mirroring combines two disks as a unit; every write is duplicated on both disks.

If you lose one disk, you have an immediate copy, like a tandem truck with spare tires running in parallel: lose one, and you don’t have to pull over immediately to replace it. Striping spreads I/O over multiple disks, so on the one hand you increase throughput linearly as you add disks.

That’s because you have more disks working for you. At the same time, you increase risk with each new disk you add, because the array’s failure rate is the combined failure rate of all those disks.

For relational databases, the best RAID level is 10, which is striping over mirrored sets. You use more disks, but disks are cheap compared to the hassle of any outage.

If you’re deploying on Amazon, your candidate should be familiar with the Elastic Block Storage offering also known as EBS. This is virtualized storage, so it introduces a whole world of operational flexibility.

No longer do you have to jump through hoops to attach, add, or reconfigure storage on your servers; it can all be done through command-line API calls. That said, EBS suffers from variability problems, as with any other shared resource.

Although Amazon guarantees your average throughput, the I/O you get at a given time can swing wildly from low to high. Consider Linux software RAID across multiple EBS volumes to mitigate against this.

7. How Would You Setup Master/Slave & Master/Master Replication?

A basic replication setup involves creating a full dump of the primary database while its tables are locked. The DBA should capture the master status, logfile, and position at that time. She should then copy the dump file to the secondary machine and import the full dump.

Next, the CHANGE MASTER TO statement should be run to point this database instance at its master. Lastly, START SLAVE should be issued. If all goes well, SHOW SLAVE STATUS should show YES for both of these status variables:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

Master-master replication is similar, except for one additional step. After the above steps have run, you know that your application is not pointing at the slave database. If you’re not sure, verify that fact first.

Now determine the logfile name & position on the slave with SHOW MASTER STATUS. Return to the primary box, and run the CHANGE MASTER TO command to make it slave from the secondary box. You’ve essentially asked MySQL to create a circular loop of replication.

How does MySQL avoid getting into an infinite loop in this scenario?  The server_id variable must be set, and be unique for all MySQL instances in your replication topology.
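The configuration side of the steps above can be sketched as follows. The hostnames, password, and log coordinates are hypothetical; the real MASTER_LOG_FILE and MASTER_LOG_POS come from SHOW MASTER STATUS on the server being replicated from:

```sql
-- In each server's my.cnf, a unique server_id, e.g.:
--   [mysqld]
--   server_id = 1     (use 2 on the other box, and so on)

-- On the replica, point it at its master:
CHANGE MASTER TO
  MASTER_HOST = 'db-primary.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '...',
  MASTER_LOG_FILE = 'mysql-bin.000042',
  MASTER_LOG_POS  = 107;
START SLAVE;

-- Then verify both threads report Yes:
SHOW SLAVE STATUS\G
```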

For extra credit, ask the candidate about replication integrity checking. As important as this piece is to a solid reliable replication setup, many folks in the MySQL world are not aware of the necessity.

Though replication can be set up and running properly, that does not mean it will keep your data clean and perfect.

Due to the nature of statement-based replication, combined with non-deterministic functions and/or non-transactional tables, statements can make their way into the binary logs without completing. This means they may then complete on the slave, resulting in a different row set on the same table in the master and slave instances.

Percona’s pt-table-checksum is the preventative tool to use. It builds checksums of all your tables, and then propagates those checksums through replication to the slave. An additional check can then be run on the slave side to confirm consistency or show which rows and data are different.

8. How Are Users & Grants Different In MySQL Than Other DBs?

Creating a grant in MySQL can effectively create the user as well.  MySQL users are implemented in a very rudimentary fashion. The biggest misunderstanding in this area surrounds the idea of a user.

In most databases a username is unique by itself.  In MySQL it is the *combination* of user & hostname that must be unique.

So, for example, if I create user sean@localhost, sean@server2 and sean@server3, they are actually three distinct users, which can have distinct passwords, and privileges. It can be very confusing that sean logging in from the local command line has different privileges or password than sean logging in from server2 and server3. So that’s an important point.
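In statements (with hypothetical hostnames), the three-users point looks like this. Note that the implicit-creation behavior of GRANT applies to versions before MySQL 8.0, where GRANT can no longer create users:

```sql
-- Three distinct users, despite the shared name "sean",
-- each with its own password and privileges:
CREATE USER 'sean'@'localhost' IDENTIFIED BY 'secret1';
CREATE USER 'sean'@'server2'   IDENTIFIED BY 'secret2';
GRANT SELECT ON app.* TO 'sean'@'server2';

-- Before MySQL 8.0 (without NO_AUTO_CREATE_USER in sql_mode),
-- a GRANT could implicitly create the user as well:
GRANT ALL ON app.* TO 'sean'@'server3' IDENTIFIED BY 'secret3';
```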

9. How Might You Hack A MySQL Server?

This is a good opportunity for the candidate to show some creativity with respect to operations and Linux servers.  There are all sorts of ways into a database server:

  1. bad, weak or unset passwords
  2. files with incorrect permissions – modifying or deleting filesystem files can take a database down or corrupt data
  3. intercepting packets – could reveal unencrypted data inside the database
  4. unpatched software – bugs often reveal vulnerabilities that allow unauthorized entry
  5. moving, disabling or interrupting the backup scripts – a possible timebomb until you need to restore
  6. DNS spoofing, could allow login as a different user
  7. generous permissions – may allow an unprivileged user access to protected data

There are endless possibilities here. Listening for creative thinking reveals how thoroughly and effectively that person will think about protecting your systems from those same threats.

10. Brain Teasers, Riddles, and Coding Problems

Google was, for a long time, a fan of these types of tests at interviews, but I’m not at all. For one thing, you filter for good test takers, and for another, the candidate has none of the resources – books or the internet – that they would have on the job.

Why not instead ask them to tell a story? Storytelling conveys a lot of things. It conveys a bit of teaching ability, which extends far beyond internalizing some multiple-choice questions.

It tells you more about their personality, which as I’ve said is very important. It shows how they solve problems, as they’ll take you through their process. And gives them an opportunity to tell you about a real-world triumph they presided over.

Personality Questions

In my experience, some of the most important traits of a new hire center around personality traits, and how they might mix with your existing team. Being punctual for an interview, for instance, sets a precedent for many things. But that door swings both ways, so if you want to hire these types of folks, don’t keep them waiting either!

Pay attention to whether or not the candidate takes some lead in the conversation at all. This can indicate the person is a self-starter.  Obviously, a great candidate will also listen carefully and patiently to what you have to say, but may then take the ball and run with it somewhat.

Listen for signals that the person is active in the field: posting on forums and attending conferences and meetups on technology topics. You might also ask them if they blog, and what topics interest them.

Frequently Asked Questions (FAQs)

How Do I Prepare for A DBA Interview?

As a job seeker, you first need knowledge of and experience in database administration. While preparing for an interview, review the job description carefully, research the company and its industry, and then refresh your technical knowledge of DBMS concepts and relevant programming languages accordingly.

What Questions Are Asked in A DBA Interview?

Here are the general questions that are asked in a DBA interview:

  • What purpose does the Model Database Serve?
  • Explain your SQL Server DBA Experience?
  • What is DCL?
  • What is Replication?
  • Why would you use SQL Agent?
  • What is DBCC?
  • What are the recovery models for a database?
  • What is the importance of a recovery model?

What Are the Questions Asked in MySQL Interview?

Here are the questions which are generally asked in MySQL interview:

  • What is MySQL?
  • What are some of the advantages of using MySQL?
  • What do you mean by ‘databases’?
  • What does SQL in MySQL stand for?
  • What does a MySQL database contain?
  • How can you interact with MySQL?
  • What are MySQL Database Queries?
  • What are some of the common MySQL commands?

What Is the Role of DBA In MySQL?

The first role of a database administrator in MySQL is to administer MySQL Server data systems and structures. A DBA uses software to store and organize data, records, or information. They also need to ensure that the data is protected from unauthorized access and that users can easily access the information they need.

Conclusion

The basics that you need to know as a MySQL DBA are discussed in this article, and we’ll return with another part covering more top MySQL DBA interview questions. Hopefully you now have a basic idea of how to prepare yourself for a DBA interview. Read our next article on the same topic to learn more. Till then, have a great day!

The post Top MySQL DBA Interview Questions (Part 2) appeared first on iheavy.

Planet MySQL

Top MySQL DBA Interview Questions (Part 1)


MySQL DBAs are in greater demand now than they’ve ever been. While some firms are losing the fight for talent, promising startups with a progressive bent are getting first dibs with the best applicants.

Whatever the case, interviewing a MySQL DBA is a skill in itself, so I thought I’d share a guide of top MySQL DBA interview questions to help with your screening process. It’s long and detailed, with some background for context, so I will be publishing it in two parts.


The History of The DBA As A Career

In the Oracle world of enterprise applications, the DBA has long been a strong career path. Companies building out their sales staff required PeopleSoft or SAP, and those deploying financial applications or the e-business suite needed operations teams to manage those systems.

At the heart of that operations team were database administrators or DBAs, a catchall title that included the responsibility of guarding your business’s crown jewels. Security of those data assets, backups, management, and performance were all entrusted to the DBA.

In the world of web applications, things have evolved a bit differently. Many a startup is driven only by developers. In those smaller shops, operations tasks are delegated to one developer who takes on the additional responsibility of managing systems.

In that scenario, Operations or DBA duties become a sort of secondary role to the primary one of building the application. Even in cases where the startup creates a specific operations role with one person managing systems administration, chances are they don’t also have DBA experience. Instead, these startups are more likely to manage the database as a typical Linux application.

When I Grow Up I (Don’t) Want To Be A MySQL DBA

Where do they come from, and why don’t more computer science folks gravitate toward operations and DBA roles? This may be due in part to the romance of certain job roles, which we discussed in a past article, The Mythical MySQL DBA.

This pattern appeared a lot in the Oracle world as well. Many folks who were career DBAs actually moved to that role from the business side. In fact, you’d find that many didn’t have a computer science or engineering background in the first place.

In my experience, I saw many Linux and Unix administrators with a stronger foundation who would fit into the DBA role but were simply not interested in it. The same can be said of the MySQL side of the house.

Computer science grads don’t get out of school aiming for a career in ops or as a DBA because it has never been regarded as the pinnacle. It’s typically the PROGRAMMERS who become the rockstars in a cool startup.

But as the Internet grows into a richer and more complex medium, things are changing. People talk about scalability, high availability, zero downtime, and performance tuning. When brief outages cost millions in losses, expectations are very high, and that requires skilled, experienced DBAs.

We’ve made a list of skills questions, general questions, and ‘good-to-know’ questions. Have fun grilling your candidate with them, though bear in mind that interviews aren’t about knowing it all, but rather about how the person demonstrates critical thinking skills.

Skills Questions

Here are the top 5 MySQL DBA interview questions:

1. Why Are SQL Queries So Fundamental to Database Performance?

This is the one question that a DBA should have an answer to. If they can’t answer this question, they’re unlikely to be a good candidate.

After a MySQL server is set up and running, with many of the switches and dials set to use memory and play well with other services on the Linux server, queries remain an everyday challenge. Why is this?

SQL queries are like little programs in and of themselves. They ask the database server to collect selections of records, cross-tabulate them with other collections of records, then sort them, and slice and dice them. All of this requires MySQL to build temporary tables, perform resource-intensive sorts, and then organize the output in nice bite-size chunks.

Unfortunately, there are many ways to get the syntax and the results right, yet not do so efficiently. This might sound like a minor point, but on a modern website you may have 5,000 concurrent users, each hitting pages that run multiple queries.

What makes this an ongoing challenge is that websites are typically a moving target, with business requirements pushing new code changes all the time. New code means new queries, which pose ongoing risks to application stability.

2. Indexes – Too Many, Too Few; What’s The Difference?

Indexes are very important to the smooth functioning of a relational database. Imagine your telephone book of yore. I can look up all the people with the last name “Hull” in Manhattan because I have the proper index. But most phone books don’t include an index for *first* names, even though one might occasionally come in handy – say, for the names “Star” or “Persephone”.

You can imagine that, if you had a phone book that you maintained and updated, every time you added or removed a name you would also have to update the index. That’s right, and the same goes for your relational database.

So therein lies the trade-off, and it’s an important one. When you are *modifying* your data, adding, updating, or removing records, you must do work to keep the index up to date.

More indexes mean more work. However, when you’re looking up data or *querying* in SQL speak, more indexes mean more ways of looking up data fast. One more trade-off is that indexes take up more pages in your phonebook, and so too they take up more space on disk.
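The phone-book trade-off can be sketched with SQLite from Python’s standard library (an analogy; the same principle applies to MySQL). An index on `last_name` lets the lookup use it, while a lookup by `first_name` still scans the whole table:

```python
import sqlite3

# A toy "phone book" table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (last_name TEXT, first_name TEXT)")
conn.executemany(
    "INSERT INTO people VALUES (?, ?)",
    [("Hull", "Sean"), ("Smith", "Star"), ("Jones", "Persephone")],
)

# With an index on last_name, lookups by last name can use it...
conn.execute("CREATE INDEX idx_last ON people(last_name)")
plan_indexed = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM people WHERE last_name = 'Hull'"
).fetchall()
print(plan_indexed)  # the plan mentions idx_last

# ...but lookups by first name still scan the whole table
plan_scan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM people WHERE first_name = 'Star'"
).fetchall()
print(plan_scan)  # the plan shows a full table scan
```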

3. Backup & Recovery – Explain Various Types & Scenarios For Restore

Backups come in a few different flavors that the DBA should be familiar with.

Cold backups involve shutting down the database server (MySQL) and then backing up all the data files by copying them to another directory. To be really thorough, the entire datadir, including binlogs, log files, and the /etc/my.cnf config file, should also be backed up. A cold backup is a complete database in itself, and can be copied to an alternate server and mounted as-is.

Logical backups involve using the mysqldump tool. This locks tables while it runs to maintain consistency of changing data, and can cause downtime. The resulting dump file contains CREATE DATABASE, CREATE TABLE & CREATE INDEX statements to rebuild the database.

Note that the file itself is not a database, but rather a set of instructions that tell a MySQL server *HOW* to reconstruct the database. That’s an important distinction.
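This distinction can be seen in miniature with SQLite’s logical dump from Python’s standard library (an analogy to mysqldump, not the tool itself): the dump is SQL text, not a data file.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('Alice')")
conn.commit()

# A logical backup is SQL text (instructions for rebuilding the
# database), not a database file itself
dump = "\n".join(conn.iterdump())
print(dump)  # contains CREATE TABLE and INSERT statements
```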

Hot backups are a great addition to the mix, as they allow the physical database data files to be backed up *WHILE* the server is up and running. In MySQL, this can be achieved with the xtrabackup tool, available from Percona. Despite the name, it works very well with MyISAM and InnoDB tables too, so don’t worry if you’re not using XtraDB tables.

There are a few different restore scenarios, and the candidate should be able to describe how these various backups can be restored, and what the steps to do so would be.

In addition, they should understand what point-in-time recovery is, and how to perform that as well. After restoring one of the above three backup types, the DBA would use the mysqlbinlog utility to apply any subsequent transactions from the binary logs.

So, if the backup was made at 2am last night, and you restore that backup, the mysqlbinlog tool would be used to dig up transactions since 2am, and apply them to that restored database.

4. Troubleshooting Performance

Since this is an ongoing challenge with relational databases, a good grasp of it is crucial. One way to challenge the candidate would be to describe a recent performance problem you experienced with your infrastructure and ask them how they would go about resolving it.

If they struggle with the particulars of what you ran into, ask them to describe a big performance challenge they solved, what the cause was, and how they performed the analysis.

Typically, the first steps involve mitigating the immediate problem by finding out what changed in the environment either operationally or code changes. If there is a bug that was hit or other strange performance anomalies, the first stop is usually looking at log files.

MySQL server error logs and the slow query log are key files. From there, analyzing those files during the timeframe where problems occurred should yield some clues.

You might also hope to hear some comments about metrics collection in this discussion. Tools such as Cacti, Munin, OpenNMS, or Ganglia are invaluable for drilling down on a past event or outage and sifting through server stats to find trouble.

5. Joins – Describe A Few Kinds and How the Server Performs Them

A basic understanding of INNER JOIN and OUTER JOIN would be a great start. A simple example might be employees and departments.

If you have four employees and two departments, an INNER JOIN of these tables together will give you the departments employees belong to.

Add another employee without assigning her to a department, and the INNER JOIN won’t display her. Likewise, a newly added department that doesn’t yet contain any employees won’t be displayed either.

However, performing an OUTER JOIN will include those unmatched rows: a LEFT OUTER JOIN shows the unassigned employee with NULL in the department columns, and a RIGHT OUTER JOIN shows the empty department with NULL in the employee columns. (Note that MySQL supports LEFT and RIGHT OUTER JOINs, but not FULL OUTER JOIN.)

Or consider another example: a credit card company. One table contains cardholders’ identities: their card numbers, addresses, and other personal information. A second table contains their account activity.

When a cardholder first joins, they don’t have any monthly statements, so an INNER JOIN of cardholders with statements will yield no rows for them. However, an OUTER JOIN on those two tables will yield a record, with NULLs in the statement columns.
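The employees-and-departments example can be sketched with SQLite from Python’s standard library (the same SQL semantics apply in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER);
INSERT INTO departments VALUES (1, 'Sales'), (2, 'Engineering');
INSERT INTO employees VALUES
    (1, 'Ann', 1), (2, 'Bob', 1), (3, 'Cal', 2), (4, 'Dee', 2),
    (5, 'Eve', NULL);  -- not yet assigned to a department
""")

# INNER JOIN: only employees with a matching department (Eve is dropped)
inner = conn.execute("""
    SELECT e.name, d.name FROM employees e
    JOIN departments d ON e.dept_id = d.id
""").fetchall()
print(inner)  # 4 rows, Eve missing

# LEFT OUTER JOIN: unmatched employees appear with NULL for the department
outer = conn.execute("""
    SELECT e.name, d.name FROM employees e
    LEFT JOIN departments d ON e.dept_id = d.id
""").fetchall()
print(outer)  # 5 rows; Eve paired with None
```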

Continue reading our article Top MySQL DBA interview questions (Part 2) to learn five more questions that test a MySQL DBA’s knowledge.

The post Top MySQL DBA Interview Questions (Part 1) appeared first on iheavy.

Planet MySQL

Measure Execution Time with timeit() in Python

https://blog.finxter.com/wp-content/uploads/2023/08/pexels-photo-1095601.jpeg

Understanding Timeit in Python

The timeit module is a tool in the Python standard library, designed to measure the execution time of small code snippets. It makes it simple for developers to analyze the performance of their code, allowing them to find areas for optimization.

⏱ Execution time is affected by factors such as system load and fluctuations in CPU performance. By running the code snippet many times and timing the whole batch, timeit reduces the influence of these factors and provides a more reliable measure of your code’s performance than a single run would.

To get started using timeit, simply import the module and use the timeit() method. This method accepts a code snippet as a string and measures its execution time. Optionally, you can also pass the number parameter to specify how many times the code snippet should be executed.

Here’s a quick example:

import timeit

code_snippet = '''
def example_function():
    return sum(range(10))

example_function()
'''

execution_time = timeit.timeit(code_snippet, number=1000)
print(f"Execution time: {execution_time:.6f} seconds")

Sometimes, you might want to evaluate a code snippet that requires additional imports or setup code. For this purpose, the timeit() method accepts a setup parameter where you can provide any necessary preparation code.

For instance, if we adjust the previous example so that the snippet needs an import, we can supply it via setup:

import timeit

code_snippet = '''
def example_function():
    return sum(math.sqrt(x) for x in range(10))

example_function()
'''

setup_code = "import math"

execution_time = timeit.timeit(code_snippet, setup=setup_code, number=1000)
print(f"Execution time: {execution_time:.6f} seconds")

Keep in mind that timeit is primarily intended for small code snippets and may not be suitable for benchmarking large-scale applications.

Measuring Execution Time

The primary method of measuring execution time with timeit is the timeit() function. This method runs the provided code repeatedly and returns the total time taken. By default, it repeats the code one million times! Be careful when measuring time-consuming code, as it may take a considerable duration.

import timeit

setup_code = '''
def example_function():
    return sum(range(10))
'''

code_to_test = "example_function()"

elapsed_time = timeit.timeit(code_to_test, setup=setup_code, number=1000)
print(f'Elapsed time: {elapsed_time} seconds')

When using the timeit() method, the setup time is excluded from the measured execution time. This way, the measurement is more accurate and focuses on the evaluated code’s performance, without including the time taken to configure the testing environment.

Another useful method in the timeit module is repeat(), which calls the timeit() function multiple times and returns a list of results.

results = timeit.repeat(code_to_test, setup=setup_code, repeat=3, number=1000)
averaged_result = sum(results) / len(results)
print(f'Average elapsed time: {averaged_result} seconds')

Sometimes it’s necessary to compare the execution speeds of different code snippets to identify the most efficient implementation. For quick, ad-hoc comparisons of multiple code sections, the time.time() function is a simple alternative.

import time

start_time = time.time()
first_example_function()
end_time = time.time()

elapsed_time_1 = end_time - start_time

start_time = time.time()
second_example_function()
end_time = time.time()

elapsed_time_2 = end_time - start_time

print(f'First function elapsed time: {elapsed_time_1} seconds')
print(f'Second function elapsed time: {elapsed_time_2} seconds')

In conclusion, using the timeit module and the time.time() function allows you to accurately measure and compare execution times in Python.

The Timeit Module

To start using the timeit module, simply import it:

import timeit

The core method in the timeit module is timeit(), which runs a specific code snippet a given number of times and returns the total time taken.

For example, suppose we want to measure the time it takes to square a list of numbers using a list comprehension:

import timeit

code_to_test = """
squared_numbers = [x**2 for x in range(10)]
"""

elapsed_time = timeit.timeit(code_to_test, number=1000)
print("Time taken:", elapsed_time)

If you are using Jupyter Notebook, you can take advantage of the %timeit magic function to conveniently measure the execution time of a single line of code:

%timeit squared_numbers = [x**2 for x in range(10)]

In addition to the timeit() method, the timeit module provides a repeat() method, and the timeit.Timer class provides an autorange() method.

  • The repeat() method runs timeit() multiple times and returns a list of execution times, while
  • Timer.autorange() automatically determines the number of loops needed for a stable measurement (it increases the loop count until the total run time reaches at least 0.2 seconds).

Here’s an example using the repeat() method:

import timeit

code_to_test = """
squared_numbers = [x**2 for x in range(10)]
"""

elapsed_times = timeit.repeat(code_to_test, number=1000, repeat=5)
print("Time taken for each run:", elapsed_times)
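And here’s a minimal sketch of autorange(), which is exposed on the timeit.Timer class rather than at module level:

```python
import timeit

timer = timeit.Timer("sum(range(10))")

# autorange() keeps increasing the loop count until the total run
# time reaches at least 0.2 seconds, then returns (loops, total_time)
loops, total_time = timer.autorange()
print(f"{loops} loops took {total_time:.4f} seconds")
```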

Using Timeit Function

To measure the execution time of a function, you can use the timeit.timeit() method. This method accepts two main arguments: the stmt and setup. The stmt is a string representing the code snippet that you want to time, while the setup is an optional string that can contain any necessary imports and setup steps. Both default to 'pass' if not provided.

Let’s say you have a function called square() that calculates the square of a given number:

def square(x):
    return x ** 2

To measure the execution time of square() using timeit, you can do the following:

results = timeit.timeit('square(10)', 'from __main__ import square', number=1000)

Here, we’re asking timeit to execute the square(10) function 1000 times and return the total execution time in seconds. You can adjust the number parameter to run the function for a different number of iterations.

Another way to use timeit, especially for testing a callable, is the timeit.Timer class. Since Timer expects a zero-argument callable (or a string) as stmt, wrap any arguments in a lambda:

timer = timeit.Timer(lambda: square(10))
results = timer.timeit(number=1000)

Now you have your execution time in the results variable, which you can analyze and compare with other functions’ performance.

Examples and Snippets

The simplest way to use timeit.timeit() is by providing a statement as a string, which is the code snippet we want to measure the execution time for.

Here’s an example:

import timeit

code_snippet = "sum(range(100))"
elapsed_time = timeit.timeit(code_snippet, number=1000)
print(f"Execution time: {elapsed_time:.6f} seconds")

In the example above, we measure the time it takes to execute sum(range(100)) 1000 times. The number parameter controls how many repetitions of the code snippet are performed. By default, number=1000000, but you can set it to any value you find suitable.

For more complex code snippets with multiple lines, we can use triple quotes to define a multiline string:
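For instance, timing a small loop defined as a multiline string:

```python
import timeit

# A multiline snippet defined with triple quotes
code_snippet = """
total = 0
for i in range(100):
    total += i
"""

elapsed_time = timeit.timeit(code_snippet, number=1000)
print(f"Execution time: {elapsed_time:.6f} seconds")
```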

Python Timeit Functions

The timeit module in Python allows you to accurately measure the execution time of small code snippets. It provides two essential functions: timeit.timeit() and timeit.repeat().

⏱ The timeit.timeit() function measures the execution time of a given statement. You can pass the stmt argument as a string containing the code snippet you want to time. By default, timeit.timeit() will execute the statement 1,000,000 times and return the total time taken to run it.

However, you can adjust the number parameter to specify a different number of iterations.

For example:

import timeit

code_to_test = "sum(range(100))"

execution_time = timeit.timeit(code_to_test, number=10000)
print(execution_time)

⏱ The timeit.repeat() function is a convenient way to call timeit.timeit() multiple times. It returns a list of timings for each repetition, allowing you to analyze the results more thoroughly. You can use the repeat parameter to specify the number of repetitions.

Here’s an example:

import timeit

code_to_test = "sum(range(100))"

execution_times = timeit.repeat(code_to_test, number=10000, repeat=5)
print(execution_times)

In some cases, you might need to include additional setup code to prepare your test environment. You can do this using the setup parameter, which allows you to define the necessary setup code as a string. The execution time of the setup code will not be included in the overall timed execution.

import timeit

def example_function():
    return sum(range(100))

setup_code = "from __main__ import example_function"

result = timeit.timeit("example_function()", setup=setup_code, number=1000)
print(result)

Measuring Execution Time of Code Blocks

The timeit module provides a straightforward interface for measuring the execution time of small code snippets. You can use this module to measure the time taken by a particular code block in your program.

Here’s a brief example:

import timeit

def some_function():
    # Your code block here
    pass

time_taken = timeit.timeit(some_function, number=1)
print(f"Time taken: {time_taken} seconds")

In this example, the timeit.timeit() function measures the time taken to execute the some_function function. The number parameter specifies the number of times the function will be executed, which is set to 1 in this case.

For more accurate results, you can use the timeit.repeat() function, which measures the time taken by the code block execution for multiple iterations.

Here’s an example:

import timeit

def some_function():
    # Your code block here
    pass

repeat_count = 5
time_taken = timeit.repeat(some_function, number=1, repeat=repeat_count)
average_time = sum(time_taken) / repeat_count
print(f"Average time taken: {average_time} seconds")

In this example, the some_function function is executed five times, and the average execution time is calculated.

Besides measuring time for standalone functions, you can also measure the time taken by individual code blocks inside a function. Here’s an example:

import timeit

def some_function():
    # Some code here

    start_time = timeit.default_timer()
    # Code block to be measured
    end_time = timeit.default_timer()

    print(f"Time taken for code block: {end_time - start_time} seconds")

In this example, the timeit.default_timer() function captures the start and end times of the specified code block.

Using Timeit with Jupyter Notebook

Jupyter Notebook provides an excellent environment for running and testing Python code. To measure the execution time of your code snippets in Jupyter Notebook, you can use the %timeit and %%timeit magic commands, which are built into the IPython kernel.

⏱ The %timeit command is used to measure the execution time of a single line of code. When using it, simply prefix your line of code with %timeit.

For example:

%timeit sum(range(100))

This command will run the code multiple times and provide you with detailed statistics like the average time and standard deviation.

⏱ To measure the execution time of a code block spanning multiple lines, you can use the %%timeit magic command. Place this command at the beginning of a cell in Jupyter Notebook, and it will measure the execution time for the entire cell.

For example:

%%timeit
total = 0
for i in range(100):
    total += i

Managing Garbage Collection and Overhead

When using timeit in Python to measure code execution time, it is essential to be aware of the impact of garbage collection and overhead.

Garbage collection is the process of automatically freeing up memory occupied by objects that are no longer in use. This can potentially impact the accuracy of timeit measurements if left unmanaged.

By default, timeit disables garbage collection to avoid interference with the elapsed time calculations. However, you may want to include garbage collection in your measurements if it is a significant part of your code’s execution, or if you want to minimize the overhead and get more realistic results.

To include garbage collection in timeit executions, you can use the gc.enable() function from the gc module and customize your timeit setup.

Here’s an example:

import timeit
import gc

mysetup = "import gc; gc.enable()"
mycode = """
def my_function():
    # Your code here
    pass
my_function()
"""

elapsed_time = timeit.timeit(setup=mysetup, stmt=mycode, number=1000)
print(elapsed_time)

Keep in mind that including garbage collection will likely increase the measured execution time. Manage this overhead by balancing the need for accurate measurements with the need to see the impact of garbage collection on your code.

Additionally, you can use the timeit.repeat() function and the Timer.autorange() method to measure the execution time of your code snippets multiple times, which can help you capture the variability introduced by garbage collection and other factors.

Choosing the Best Timer for Performance Measurements

Measuring the execution time of your Python code is essential for optimization, and the timeit module offers multiple ways to achieve this. This section will focus on selecting the best timer for measuring performance.

When using the timeit module, it is crucial to choose the right timer function. Different functions may provide various levels of accuracy and be suitable for different use cases. The two main timer functions are time.process_time() and time.perf_counter().

time.process_time() measures the total CPU time used by your code, excluding any time spent during the sleep or wait state. This is useful for focusing on the computational efficiency of your code. This function is platform-independent and has a higher resolution on some operating systems.

Here is an example code snippet:

import time

start = time.process_time()

# Your code here

end = time.process_time()
elapsed = end - start
print(f"Execution time: {elapsed} seconds")

On the other hand, time.perf_counter() measures the total elapsed time, including sleep or wait states. This function provides a more accurate measurement of the total time required by your code to execute. This can help in understanding the real-world performance of your code.

Here’s an example using time.perf_counter():

import time

start = time.perf_counter()

# Your code here

end = time.perf_counter()
elapsed = end - start
print(f"Execution time: {elapsed} seconds")

In addition to measuring execution time directly, you can also calculate the time difference using the datetime module. This module provides a more human-readable representation of time data.

Here’s an example code snippet that calculates the time difference using datetime:

from datetime import datetime

start = datetime.now()

# Your code here

end = datetime.now()
elapsed = end - start
print(f"Execution time: {elapsed}")

Frequently Asked Questions

How to measure function execution time using timeit?

To measure the execution time of a function using the timeit module, you can use the timeit.timeit() method. First, import the timeit module, and then create a function you want to measure. You can call the timeit.timeit() method with the function’s code and the number of executions as arguments.

For example:

import timeit

def my_function():
    # Your code here
    pass

execution_time = timeit.timeit(my_function, number=1000)
print("Execution time:", execution_time)

What is the proper way to use the timeit module in Python?

The proper way to use the timeit module is by following these steps:

  1. Import the timeit module.
  2. Define the code or function to be timed.
  3. Use the timeit.timeit() method to measure the execution time, and optionally specify the number of times the code should be executed.
  4. Print or store the results for further analysis.

How to time Python functions with arguments using timeit?

To time a Python function that takes arguments using timeit, you can use a lambda function or functools.partial(). For example:

import timeit
from functools import partial

def my_function(arg1, arg2):
    # Your code here
    pass

# Using a lambda function
time_with_lambda = timeit.timeit(lambda: my_function("arg1", "arg2"), number=1000)

# Using functools.partial()
my_function_partial = partial(my_function, "arg1", "arg2")
time_with_partial = timeit.timeit(my_function_partial, number=1000)

What are the differences between timeit and time modules?

The timeit module is specifically designed for measuring small code snippets’ execution time, while the time module is more generic for working with time-related functions. The timeit module provides more accurate and consistent results for timing code execution, as it disables the garbage collector and uses an internal loop, reducing the impact of external factors.
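The difference can be seen side by side in a minimal sketch:

```python
import time
import timeit

# time.time(): one coarse measurement, taken once
start = time.time()
sum(range(1000))
single_run = time.time() - start

# timeit: many repetitions with the garbage collector disabled,
# which is far more stable for micro-benchmarks
many_runs = timeit.timeit("sum(range(1000))", number=10000)

print(single_run, many_runs)
```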

How to use timeit in a Jupyter Notebook?

In a Jupyter Notebook, use the %%timeit cell magic command to measure the execution time of a code cell:

%%timeit
# Your code here

This will run the code multiple times and provide the average execution time and standard deviation.

What is the best practice for measuring execution time with timeit.repeat()?

The timeit.repeat() method is useful when you want to measure the execution time multiple times and then analyze the results. The best practice is to specify the number of repeats, the number of loops per repeat, and analyze the results to find the fastest, slowest, or average time. For example:

import timeit

def my_function():
    # Your code here
    pass

repeat_results = timeit.repeat(my_function, number=1000, repeat=5)
fastest_time = min(repeat_results)
slowest_time = max(repeat_results)
average_time = sum(repeat_results) / len(repeat_results)

Using timeit.repeat() allows you to better understand the function’s performance in different situations and analyze the variability in execution time.

Recommended: How to Determine Script Execution Time in Python?

The post Measure Execution Time with timeit() in Python appeared first on Be on the Right Side of Change.

Be on the Right Side of Change

How to Cast Harry Potter Spells on Your iPhone With Siri

https://static1.makeuseofimages.com/wordpress/wp-content/uploads/2023/08/a-wizard-with-a-wand-casting-spell-next-to-harry-potter-books.jpg

If you love Harry Potter and use an iPhone, you’re in luck! Using Siri, your iPhone can become your very own wand. Apple has integrated certain Harry Potter spell commands into iPhones. And with Siri, you can cast your own spells to show off to your friends.

With that said, if you’re ready for a crash course in spell casting, here are some built-in Harry Potter spells you can use on your iPhone.

Built-In Harry Potter Spells You Can Use

As stated earlier, your iPhone has built-in Harry Potter spells (voice commands) that you can activate with Siri. But before you can start casting any spells, you’ll need to enable Siri on your iPhone if you haven’t already.

Once Siri is enabled, you are ready to start casting spells. At the moment, Apple has programmed three Harry Potter spells into its devices: Lumos, Nox, and Accio. Here’s what they do:

Lumos

In Harry Potter, Lumos is used to cast light in a dark room, and it works similarly on your iPhone. When cast, the spell turns on your iPhone’s flashlight. To activate Lumos, say, “Hey Siri, Lumos.” Lumos is pronounced “loo-mows.”

Nox

Nox is the counter spell to Lumos. So saying, “Hey Siri, Nox” will turn off your iPhone’s flashlight when you’re done using it. And if you weren’t aware already, Nox is pronounced “knocks.”

Accio

In Harry Potter, Accio is a summoning charm to draw objects toward the caster. In the case of your iPhone, Accio is used to open an app on your phone.

So saying, “Hey Siri, Accio [app name]” will open the app in question. For instance, saying “Hey Siri, Accio Spotify” will open the Spotify app on your iPhone. Accio is pronounced “Ack-ee-oh.”

How to Create Your Own Harry Potter Siri Spells

Besides those three spells, Apple lets you create additional spells for Siri commands. To do this, you must manually set up the spells and their corresponding actions using the Shortcuts app on your iPhone.

For example, let’s set up the spell “Lumos Maxima.” This spell turns on your iPhone’s flashlight at the highest brightness level. Here’s what you need to do:

  1. Open the Shortcuts app, head to the Shortcuts section, and tap the plus (+) icon at the top-right corner.
  2. Then, tap New Shortcut at the top to rename it to the name of your spell. In this case, you’ll want to rename it to Lumos Maxima.
  3. Tap Add Action.
  4. From the menu that appears, tap Set Flashlight.
  5. Now, tap the blue arrow (next to On) and drag the slider all the way to the right.
  6. Tap the Play button at the bottom of the screen to test it out. Then, tap Done.

Now, say, “Hey Siri, Lumos Maxima!” to see if the spell command worked as intended.

Turn Your iPhone Into a Wand With Siri

Thanks to Siri, you have your very own wand at your fingertips. Now, you can perform simple tasks at any instance with these spells. Like Harry Potter, what you choose to do with this power is up to you.

You can try out different spells, such as Silencio to activate Do Not Disturb or Muffliato and Sonorus to control your iPhone’s volume.

MakeUseOf

How to Generate and Read Barcodes With Python

https://static1.makeuseofimages.com/wordpress/wp-content/uploads/2022/06/Barcode.jpg

When you purchase an item from a store, the parallel black stripes of varying widths printed on it are called a barcode. Barcodes are a way of representing data in a visual, machine-readable format. They store information about products for easy identification and tracking, and various industries use them for inventory management.

Using Python, you can generate barcodes, and you can scan and read a barcode’s content.

How to Generate and Customize Barcodes

The following steps show how to generate barcodes using the python-barcode library.

1. Install the Required Library

Open your terminal or command prompt and run the following pip command to install the required library. Ensure you have pip installed on your machine.

pip install python-barcode

2. Import the Required Modules

In your script, include the following import statements to import the modules needed for barcode generation.

import barcode
from barcode.writer import ImageWriter

Writers handle the generation and saving of barcode images in different formats. The python-barcode library provides different barcode writers. Here you’re going to use the ImageWriter class which renders barcodes as images.

3. Code to Generate Barcode

The python-barcode library offers various barcode formats, such as Code39, Code128, EAN13, and ISBN-10 for generating barcodes.

def generate_barcode(data, barcode_format, options=None):
    # Look up the barcode class for the given format, e.g. "code128"
    barcode_class = barcode.get_barcode_class(barcode_format)

    # Build the barcode, rendering it as an image
    barcode_image = barcode_class(data, writer=ImageWriter())

    # Save to "barcode.png" (the extension depends on the writer)
    barcode_image.save("barcode", options=options)

The generate_barcode function generates a barcode based on the given data and format (barcode_format) and saves the barcode image to a file, barcode.png. The file extension depends on the writer class you use.

4. Generate and Customize Barcode

To generate a barcode, call the generate_barcode function and pass the required parameters.

generate_barcode("MakeUseOf", "code128")

Writers take several options that allow you to customize barcodes. Customization options include modifying the size, font, color of the barcode, and so on. You can refer to the python-barcode documentation to access the complete list of common writer options.

 
generate_barcode("MakeUseOf", "code128", options={"foreground": "red",
                                                  "center_text": False,
                                                  "module_width": 0.4,
                                                  "module_height": 20})

How to Scan and Decode Barcodes

The following steps show how to scan and decode barcodes using the Python pyzbar library.

1. Install the Required Libraries

To scan and decode barcodes, you need the zbar shared library, plus the pyzbar and OpenCV Python packages. Install zbar with Homebrew on macOS or apt on Debian/Ubuntu, then install the Python packages with pip:

brew install zbar
sudo apt-get install libzbar0
pip install pyzbar opencv-python

2. Import the Required Modules

After installing the libraries, add the following import statements to your script to import the necessary modules.

import cv2
from pyzbar import pyzbar

3. Scan Barcodes From Images

To scan barcodes from image files:

  1. Load the image using OpenCV’s imread function. This returns an instance of numpy.ndarray.
  2. Pass the output array to pyzbar.decode for detection and decoding. You can also pass instances of PIL.Image.
def scan_barcode_from_image(image_path):
    # Load the image as a numpy array
    image = cv2.imread(image_path)

    # Detect and decode any barcodes in the image
    barcodes = pyzbar.decode(image)

    for barcode in barcodes:
        # The decoded payload is bytes; convert it to text
        barcode_data = barcode.data.decode("utf-8")
        barcode_type = barcode.type
        print("Barcode Data:", barcode_data)
        print("Barcode Type:", barcode_type)

The function takes an image_path parameter, reads the image, decodes any barcodes present in the image, and prints the decoded data and barcode type for each detected barcode.

 scan_barcode_from_image("barcode.png")
> Barcode Data: MakeUseOf
> Barcode Type: CODE128

4. Scan Barcodes From Webcam Stream

You can also scan and read barcodes in real-time from a webcam stream with the help of the Python OpenCV library.

def scan_barcode_from_webcam():
    # Open the default webcam
    video_capture = cv2.VideoCapture(0)

    while True:
        # Grab a single frame
        _, frame = video_capture.read()

        # Detect and decode any barcodes in the frame
        barcodes = pyzbar.decode(frame)

        for barcode in barcodes:
            barcode_data = barcode.data.decode("utf-8")
            barcode_type = barcode.type
            print("Barcode Data:", barcode_data)
            print("Barcode Type:", barcode_type)

        # Press "q" to quit
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    video_capture.release()
    cv2.destroyAllWindows()


scan_barcode_from_webcam()

The scan_barcode_from_webcam function continuously captures frames from the webcam, decodes any barcodes present in each frame, extracts their information, and prints it. To quit, press the letter q on your keyboard.

Generating Barcodes and QR Codes in Python

With Python, generating and reading barcodes becomes accessible and efficient. By following the steps outlined, you can generate a variety of barcodes to suit your needs.

QR codes (Quick Response codes) are two-dimensional barcodes that can be scanned and read by smartphones, tablets, or other devices equipped with a camera and a QR code reader application. Using the Python qrcode library, you can generate QR codes, and you can read them with the same pyzbar approach shown above.

MakeUseOf

A Guide to Encryption and Hashing in Laravel

https://ashallendesign.ams3.cdn.digitaloceanspaces.com/public/blog/128/a-guide-to-encryption-and-hashing-in-laravel.png

Every web developer should be aware of hashing and encryption because they are vital security concepts. When used correctly, they can improve web application security and help ensure the privacy of users’ personally identifiable information (PII).

In this article, we’ll explore hashing and encryption, including their differences. We’ll also learn how to use them within Laravel projects to improve application security. We’ll cover how to hash data and compare it to plain-text values. We’ll also explain the different ways that you can encrypt and decrypt data in Laravel.

What Are Hashing and Encryption?

As a web developer, you’ve likely encountered the terms “encryption” and “hashing”. You may have even used them in your own projects, but what do they mean, and how are they different?

These terms are sometimes used interchangeably, but they are different techniques with their own use cases.

Encryption and hashing are both forms of “cryptographic security” and consist of using mathematical algorithms to transform data into a form that is unreadable to humans and (mostly) secure.

The world of cryptography is complex and relies heavily on mathematics. Therefore, for the sake of this article, we’ll keep things simple and focus on the high-level concepts of both techniques and how they can be used in your Laravel applications.

Encryption

Let’s start by taking a look at encryption.

Encryption is the process of encoding information (usually referred to as “plaintext”) into an alternative form (usually referred to as “ciphertext”). Typically, ciphertext is generated using a “key” and an encryption algorithm. It can then only be decrypted and read by someone who has the correct key, depending on the type of encryption used.

Encryption can come in two forms: “symmetric” and “asymmetric”.

Symmetric encryption uses the same key for both encryption and decryption. This means that the same key is used to encrypt and decrypt the data. Examples of this type of encryption include Advanced Encryption Standard (AES) and Data Encryption Standard (DES). Laravel uses AES-256-CBC encryption by default.
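To make the "one key for both directions" idea concrete, here is a deliberately insecure Python toy. It uses a repeating XOR rather than a real cipher, purely to show that applying the same key twice restores the plaintext; production code should rely on a vetted AES implementation, as Laravel does:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the (repeated) key. Because XOR is its own
    # inverse, running this twice with the same key restores the input.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"U2x2QdvosFTtk5nL0ejrKqLFP1tUDtSt"
plaintext = b"This is a secret message."

ciphertext = xor_cipher(plaintext, key)  # "encrypt"
recovered = xor_cipher(ciphertext, key)  # "decrypt" with the same key

print(ciphertext != plaintext)  # True: the stored bytes are scrambled
print(recovered == plaintext)   # True: the same key recovers the message
```

Note that this toy has none of the security properties of AES; it only demonstrates the symmetry of the key.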

Asymmetric encryption is a bit more complicated and uses a pair of keys: a public key and a private key. The public key is used to encrypt the data, and the private key is used to decrypt the data. Thus, the public key can be shared with anyone, but the private key must be kept secret, as anyone with it can successfully decrypt the data. Examples of this type of encryption include Rivest–Shamir–Adleman (RSA) and Diffie-Hellman.

As an example of what an encrypted string might look like, let’s say that we have a string of text that we want to encrypt. We’ll use the following string as an example:

    This is a secret message.

If we encrypted this text using AES-256-CBC encryption and used U2x2QdvosFTtk5nL0ejrKqLFP1tUDtSt as the key, we would get the following ciphertext:

    eyJpdiI6IjF1cmF0YU5TMkRnR3NUMVRMMm1udFE9PSIsInZhbHVlIjoieGloYW5VVWtXV2hjcVRVY3hGTkZ1bDdoOVBSZEo1VkVWNE1LSlB5S0lhNkF3SloxeWhRejNwbjN5SEgxeUJXayIsIm1hYyI6IjljNzY0MTBmMGJlZmRjNzcwMjFiMmFjYmJhNTNkNWVhODkxMTgzYmYwMjA3N2YzMjM1YmVhZWU4NDRiOTYzZWQiLCJ0YWciOiIifQ==

If we were then to decrypt the above ciphertext using the same key, we would get the original plaintext:

    This is a secret message.

Hashing

Hashing is a bit different from encryption. It is a one-way process, meaning that when data is “hashed”, it’s transformed into what is referred to as a “hash value”. However, unlike encryption, you cannot reverse the process and get the original data. Hence, the hash value is not reversible.
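As a quick illustration of the one-way property, here is a small Python sketch using the standard hashlib module. SHA-256 is used here purely for illustration; as discussed below, password storage needs a slow, salted algorithm such as bcrypt:

```python
import hashlib

message = "This is a secret message."

# Hashing is deterministic: the same input always yields the same digest.
digest = hashlib.sha256(message.encode()).hexdigest()
print(digest)

# But there is no inverse function. The only way to "reverse" a hash is
# to guess candidate inputs, hash them, and compare the digests.
print(hashlib.sha256(message.encode()).hexdigest() == digest)  # True
```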

Typically, you’d want to use a hash in a situation where you don’t want any data to be retrieved in the event of a leak. For example, you may want to store a user’s password in your database but wouldn’t want to store it in plain text in case the database was ever compromised. If you did store it in plain text, you would potentially be giving malicious hackers a list of passwords in use. In an ideal world, all users would use unique passwords, but we all know that this isn’t the case. Therefore, a leaked database of plain-text passwords would be a goldmine for hackers because they could then try to use the passwords on other websites or applications. To combat this problem, we store passwords in a hashed format.

For example, let’s say that a user’s password is “hello”. If we were to use the bcrypt algorithm to hash this password, it would look something like this:

    $2y$10$XY2DMYKvLrMj7yqYwyvK5OSvKTA6HOoPTpe3gVpVP5.Y4kN1nbOLq

An important part of hashing is the use of a “salt”. A salt is a random string of characters used to make the hashed values of the same data different. This means that if two users have the same password, their hashed passwords would be stored differently in the database. This is important because it would stop hackers from being able to deduce which users had the same password. For instance, the following strings are all hashed versions of hello:

    $2y$10$NUgYbLrzxn471GzcIN10wedXEcltcbAasHqU7hCeMFv4aCTl/6bVW
    $2y$10$AvBxO6HCRwYPNPZmeERIEOzLAJP7ZkcjrekdzaRLwY8YX4m9VJiFy
    $2y$10$YQ3lzNx8h0tDgw4K3dzJAOxycZhDhTAnueSugbmoo3NDTuq1OT8KW
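The effect of the salt can be reproduced in a few lines of Python. This sketch substitutes the standard library's PBKDF2 for bcrypt, purely to show the role of the salt:

```python
import hashlib
import os

password = b"hello"

# Two random salts for the same password...
salt_a = os.urandom(16)
salt_b = os.urandom(16)

hash_a = hashlib.pbkdf2_hmac("sha256", password, salt_a, 100_000)
hash_b = hashlib.pbkdf2_hmac("sha256", password, salt_b, 100_000)

# ...produce two different stored hashes, so identical passwords are
# indistinguishable in a leaked database.
print(hash_a != hash_b)  # True
```

In bcrypt's case, the salt is embedded in the hash string itself, which is why the three strings above all begin with the same `$2y$10$` prefix but then diverge.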

You’re probably thinking, “If it’s hashed and can’t be reversed, how do we know if the password is correct?”. We can do this by comparing the hash value of the password that the user has entered with the hash value of the password that is stored in the database by using the password_verify() function PHP provides. If they match, then we know that the password is correct. If they don’t match, then we know that the password is incorrect. Later in this article, we’ll cover how you can do this comparison in Laravel.
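As a rough, framework-agnostic sketch of what password_verify() and Hash::check() do conceptually, here is the recompute-and-compare step in Python (again using PBKDF2 in place of bcrypt):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# At registration time: store the salt and the resulting hash.
salt = os.urandom(16)
stored_hash = hash_password("hello", salt)

def verify_password(candidate: str) -> bool:
    # Recompute the hash with the stored salt and compare the digests
    # in constant time, never the plain-text passwords themselves.
    return hmac.compare_digest(hash_password(candidate, salt), stored_hash)

print(verify_password("hello"))  # True
print(verify_password("wrong"))  # False
```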

Many hashing algorithms are available, such as bcrypt, argon2i, argon2id, md5, sha1, sha256, sha512, and many more. However, when dealing with passwords, you should always use a hashing algorithm designed to be slow, such as bcrypt, because it makes it more difficult for hackers to brute-force the passwords.

Hashing in Laravel

Now that we have a basic idea of what hashing is, let’s take a look at how hashing works within the Laravel framework.

Hashing Passwords

As mentioned previously, you don’t want to store users’ passwords as plain text in a database. Instead, you’ll want to store their passwords in a hashed format. This is where the Hash facade comes into play.

Out of the box, Laravel supports three different hashing algorithms: “Bcrypt”, “Argon2i”, and “Argon2id”. By default, Laravel uses the “Bcrypt” algorithm, but if you want to use a different one, Laravel allows you to change it. We’ll cover this topic in more detail later in the article.

To get started with hashing a value, you can use the Hash facade. For example, let’s say that we want to hash the password “hello”:

    use Illuminate\Support\Facades\Hash;

    $hashedValue = Hash::make('hello');

    // $hashedValue = $2y$10$XY2DMYKvLrMj7yqYwyvK5OSvKTA6HOoPTpe3gVpVP5.Y4kN1nbOLq

As you can see, it’s really easy to hash a value.

To give this a bit of context, let’s look at how it might work in a controller in your application. Imagine that you have a PasswordController that allows authenticated users to update their password. The controller may look something like so:

    namespace App\Http\Controllers;

    use App\Http\Controllers\Controller;
    use Illuminate\Http\Request;
    use Illuminate\Support\Facades\Hash;

    final class PasswordController extends Controller
    {
        public function update(Request $request)
        {
            // Validate the new password here...

            $request->user()->fill([
                'password' => Hash::make($request->newPassword),
            ])->save();
        }
    }

Comparing Hashed Values to Plain Text

As previously mentioned, it’s not possible to reverse a hashed value. Therefore, the only way to check what a hash represents is to compare it against a plain-text candidate value. This is where the Hash::check() method comes into play.

Let’s imagine that we want to determine whether the “hello” plain-text password is the same as the hashed password that we created earlier. We can do this by using the Hash::check() method:

    $plainTextPassword = 'hello';
    $hashedPassword = '$2y$10$XY2DMYKvLrMj7yqYwyvK5OSvKTA6HOoPTpe3gVpVP5.Y4kN1nbOLq';

    if (Hash::check($plainTextPassword, $hashedPassword)) {
        // The passwords match...
    } else {
        // The passwords do not match...
    }

The Hash::check() method will return true if the plain-text password matches the hashed password. Otherwise, it will return false.

This type of approach is typically used when a user logs into your application. Ignoring any additional security measures (such as rate limiting) for our login form, your login flow might follow the steps below:

  • The user provides an email and password.
  • Attempt to retrieve a user from the database with the given email.
  • If a user is found, then compare the given password with the hashed password stored in the database by using the Hash::check method.
  • If the Hash::check method returns true, then the user has successfully logged in. Otherwise, they’ve entered the incorrect password.
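Those steps can be sketched in framework-agnostic Python; the users dict and the email address are hypothetical stand-ins for a real database lookup, and PBKDF2 stands in for Laravel's bcrypt hashing:

```python
import hashlib
import hmac
import os

def make_hash(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Stand-in for a users table: email -> (salt, hashed password).
salt = os.urandom(16)
users = {"user@example.com": (salt, make_hash("hello", salt))}

def attempt_login(email: str, password: str) -> bool:
    record = users.get(email)  # step 2: look the user up by email
    if record is None:
        return False           # no user found with that email
    stored_salt, stored_hash = record
    # steps 3-4: compare the given password against the stored hash
    return hmac.compare_digest(make_hash(password, stored_salt), stored_hash)

print(attempt_login("user@example.com", "hello"))  # True
print(attempt_login("user@example.com", "nope"))   # False
```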

Hash Drivers in Laravel

Laravel provides the functionality for you to choose between different hashing algorithms. By default, Laravel uses the “Bcrypt” algorithm, but you can change it to either the “Argon2i” or the “Argon2id” algorithm, which are also supported. Alternatively, you can implement your own hashing algorithm, but this is strongly discouraged because you may introduce security vulnerabilities into your application. Instead, you should use one of the algorithms provided through PHP so that you can be sure the algorithms are tried and tested.

To change the hashing algorithm being used across your entire application, you can change the driver value in the config/hashing.php config file. The default value is bcrypt, but you can change this to either argon or argon2id, like so:

    return [

        // ...

        'driver' => 'bcrypt',

        // ...

    ];

Alternatively, if you’d prefer to explicitly define the algorithm that should be used, you can use the driver method on the Hash facade to determine which hashing driver is used. For example, if you wanted to use the “Argon2i” algorithm, you could do the following:

    $hashedValue = Hash::driver('argon')->make('hello');

Encryption in Laravel

Let’s now take a look at how to use encryption in Laravel.

Encrypting and Decrypting Values

To get started with encrypting values in Laravel, you can make use of the encrypt() and decrypt() helper functions or the Crypt facade, which provides the same functionality. For example, let’s say that we want to encrypt the string “hello”:

    $encryptedValue = encrypt('hello');

By running the above, the encryptedValue variable would now be equal to something like this:

    eyJpdiI6IitBcjVRanJTN3hTdnV6REdScVZFMFE9PSIsInZhbHVlIjoiZGcycC9pTmNKRjU3RWpmeW1GdFErdz09IiwibWFjIjoiODg2N2U0ZTQ1NDM3YjhhNTFjMjFmNmE4OTA2NDI0NzRhZmI2YTg5NzEwYjdmY2VlMjFhMGZhYzE5MGI2NDA3NCIsInRhZyI6IiJ9

Now that we have the encrypted value, we can decrypt it by using the decrypt() helper function:

    $encryptedValue = 'eyJpdiI6IitBcjVRanJTN3hTdnV6REdScVZFMFE9PSIsInZhbHVlIjoiZGcycC9pTmNKRjU3RWpmeW1GdFErdz09IiwibWFjIjoiODg2N2U0ZTQ1NDM3YjhhNTFjMjFmNmE4OTA2NDI0NzRhZmI2YTg5NzEwYjdmY2VlMjFhMGZhYzE5MGI2NDA3NCIsInRhZyI6IiJ9';

    $decryptedValue = decrypt($encryptedValue);

By running the above, the decryptedValue variable would now be equal to "hello".

If any incorrect data were passed to the decrypt() helper function, an Illuminate\Contracts\Encryption\DecryptException exception would be thrown.

As you can see, encrypting and decrypting data in Laravel is relatively easy. It can simplify the process of storing sensitive data in your database and then decrypting it when you need to use it.

Changing the Encryption Key and Algorithm

As mentioned earlier in the article, Laravel uses the “AES-256” symmetric encryption algorithm by default, meaning that a single key is used for encrypting and decrypting data. By default, this key is the APP_KEY value defined in your .env file. This is very important to remember because if you change your application’s APP_KEY, then any encrypted data that you have stored can no longer be decrypted (without making changes to your code to explicitly use the older key).

If you wish to change the encryption key without changing your APP_KEY, you can do so by changing the key value in the config/app.php config file. Likewise, you can also change the cipher value to specify the encryption algorithm used. The default value is AES-256-CBC, but can be changed to AES-128-CBC, AES-128-GCM, or AES-256-GCM.

Automatically Encrypting Model Attributes

If your application stores sensitive information in the database, such as API keys or PII, you can make use of Laravel’s encrypted model cast. This model cast automatically encrypts data before storing it in the database and then decrypts it when it’s retrieved.

This is a useful feature because it allows you to continue working with your data as if it weren’t encrypted, but it’s stored in an encrypted format.

For example, to encrypt the my_secret_field on a User model, you could update your model like so:

    class User extends Model
    {
        protected $casts = [
            'my_secret_field' => 'encrypted',
        ];
    }

Now that the cast is defined, you can continue using the field as usual. Thus, if we wanted to update the value stored in my_secret_field, we could still use the update method like so:

    $user->update(['my_secret_field' => 'hello123']);

Notice how we didn’t need to encrypt the data before passing it to the update method.

If you were to now inspect the row in the database, you wouldn’t see "hello123" in the user’s my_secret_field field. Instead, you’d see an encrypted version of it, such as the following:

    eyJpdiI6IjM3MUxuV0lKc2RjSGNYT2dXanhKeXc9PSIsInZhbHVlIjoiNmxPZjUray9ZV21Ba1RnRkFNdHRTZz09IiwibWFjIjoiNTNlNmU0YTY5OGFjZWU2OGJiYzY4OWYzYzExYjMzNTI0MDQ2YTJiM2M4YWZkMjkyMGQxNmQ2MmYwNzQyNGFjYSIsInRhZyI6IiJ9

Thanks to the encrypted model cast, we would still be able to use the intended value of the field. For example,

    $result = $user->my_secret_field;

    // $result is equal to: "hello123"

As you can imagine, using the encrypted model cast is a great way to quickly add encryption to an application without having to manually encrypt and decrypt the data. Hence, it is a quick way to improve data security.

However, it does have some limitations, of which you should be aware. First, because the encryption and decryption are run when storing and fetching the Model, calls using the DB facade won’t automatically do the same. Therefore, if you intend to make any queries to the database using the DB facade (rather than using something like Model::find() or Model::get()), you’ll need to manually handle the encryption.

Furthermore, it’s worth noting that although encrypting fields in the database improves security, it doesn’t mean the data is completely secure. If a malicious attacker obtains the encryption key, they can decrypt the data and access the sensitive information. Therefore, encrypting the fields only protects you when the database alone is compromised. If you’re using a key (such as your application’s APP_KEY) that’s stored in your .env file, then a breach of your application server would also allow the attacker to decrypt the data.

Using a Custom Encryption Key When Manually Encrypting Data

When encrypting and decrypting data, there may be times when you want to use a custom encryption key. You might want to do this to avoid coupling your encrypted data to your application’s APP_KEY value. Alternatively, you may want to give users the ability to define their own encryption keys so that (theoretically) only they can decrypt their own data.

If you’re manually encrypting and decrypting data, to define your own encryption key, you can instantiate the \Illuminate\Encryption\Encrypter class and pass the key to the constructor. For example, let’s imagine that we want to encrypt some data using a different encryption key. Our code may look something like so:

    use Illuminate\Encryption\Encrypter;

    // Our custom encryption key:
    $key = 'U2x2QdvosFTtk5nL0ejrKqLFP1tUDtSt';

    $encrypter = new Encrypter(
        key: $key,
        cipher: config('app.cipher'),
    );

    $encryptedValue = $encrypter->encrypt('hello');

As you can see, this is easy to do and adds the flexibility to use a custom encryption key. Now, if we were to try and decrypt the $encryptedValue variable, we would need to use the same key that we used to encrypt it and wouldn’t be able to run the decrypt() helper function because it wouldn’t be using the correct key.

Using a Custom Encryption Key When Using Model Casts

If you’re using the encrypted model cast, as we’ve covered earlier in this article, and you want to use a custom key for the encrypted fields, you can define your own encrypter for Laravel to use by using the Model::encryptUsing method.

We’d typically want to do this within a service provider (such as the App\Providers\AppServiceProvider) so that the custom encrypter is defined and ready to use when the application starts.

Let’s take a look at an example of how we could use the Model::encryptUsing method within our AppServiceProvider:

    declare(strict_types=1);

    namespace App\Providers;

    use Illuminate\Database\Eloquent\Model;
    use Illuminate\Encryption\Encrypter;
    use Illuminate\Support\ServiceProvider;

    final class AppServiceProvider extends ServiceProvider
    {
        public function boot(): void
        {
            $this->defineCustomModelEncrypter();
        }

        private function defineCustomModelEncrypter(): void
        {
            // Our custom encryption key:
            $key = 'U2x2QdvosFTtk5nL0ejrKqLFP1tUDtSt';

            $encrypter = new Encrypter(
                key: $key,
                cipher: config('app.cipher'),
            );

            Model::encryptUsing($encrypter);
        }

        // ...
    }

As you can see in the example above, defining the custom encrypter is very similar to how we defined it manually earlier in this article. The only difference is that we’re passing the Encrypter object to the Model::encryptUsing method so that Laravel can use it behind the scenes for us.

Encrypting Your .env File

As of Laravel 9.32.0, your application’s .env file can also be encrypted. This is useful if you want to commit sensitive information from your .env file to your source control (such as Git) without storing it in plain text. It also allows a local version of your .env config variables to be shared with the other developers on your team for local development purposes.

To encrypt your environment file, you’ll need to run the php artisan env:encrypt command in your project root:

    php artisan env:encrypt

This will generate an output similar to this:

    INFO  Environment successfully encrypted.

    Key ........................ base64:amNvB/EvaX1xU+5R9Z37MeKR8gyeIRxh1Ku0pqNlK1Y
    Cipher ............................................................ AES-256-CBC
    Encrypted file ................................................. .env.encrypted

As we can see from the output, the command has used the key base64:amNvB/EvaX1xU+5R9Z37MeKR8gyeIRxh1Ku0pqNlK1Y to encrypt the .env file using the AES-256-CBC cipher and then stored the encrypted value in the .env.encrypted file.

This means that we can now store that file in our source control, and other developers on our team can use the same key to decrypt the file and use the values within it.

To decrypt the file, you need to run the php artisan env:decrypt command and pass the key used to encrypt the file. For example,

    php artisan env:decrypt --key=base64:amNvB/EvaX1xU+5R9Z37MeKR8gyeIRxh1Ku0pqNlK1Y

This will then decrypt the .env.encrypted file and store the decrypted values in the .env file.

Conclusion

Hopefully, reading this article has given you a basic understanding of hashing and encryption, as well as the differences between them. It should also have given you the confidence to use both concepts within Laravel projects to improve the security of your applications.

Laravel News Links