What To Do When MySQL Runs Out of Memory: Troubleshooting Guide
Troubleshooting crashes is never a fun task, especially if MySQL does not report the cause of the crash, which is typical when MySQL runs out of memory. Peter Zaitsev wrote a blog post in 2012, Troubleshooting MySQL Memory Usage, with lots of useful tips. With the newer versions of MySQL (5.7+) and performance_schema we can troubleshoot MySQL memory allocation much more easily.
In this blog post I will show you how to use it.
First of all, there are 3 major cases when MySQL will crash due to running out of memory:
- MySQL tries to allocate more memory than is available because we specifically told it to do so. For example: you did not set innodb_buffer_pool_size correctly. This is very easy to fix (see the quick check after this list).
- Some other process(es) on the server allocate RAM. It can be the application (Java, Python, PHP), the web server or even the backup (e.g. mysqldump). Once the source of the problem is identified, it is straightforward to fix.
- Memory leaks in MySQL. This is the worst-case scenario, and we need to troubleshoot it.
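For the first case, a quick sanity check is to compare the configured InnoDB buffer pool size with the physical RAM on the box. The query below is only a minimal sketch: it looks at the buffer pool and max_connections, while per-connection buffers (sort, join and read buffers) also add to the total.

-- how much RAM the buffer pool alone is allowed to take, in GB
select @@innodb_buffer_pool_size / 1024 / 1024 / 1024 as buffer_pool_gb,
       @@max_connections as max_connections;

If buffer_pool_gb is close to (or above) the total shown by free -g, an OOM kill is only a matter of time.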
Where to start troubleshooting MySQL memory leaks
Here is what we can start with (assuming it is a Linux server):
Part 1: Linux OS and config check
- Identify the crash by checking the MySQL error log and the Linux log files (e.g. /var/log/messages or /var/log/syslog). You may see an entry saying that the OOM killer killed MySQL. Whenever MySQL has been killed by the OOM killer, "dmesg" also shows details about the circumstances surrounding it.
- Check the available RAM:
- free -g
- cat /proc/meminfo
- Check what applications are using RAM: "top" or "htop" (see the resident vs virtual memory)
- Check the MySQL configuration: check /etc/my.cnf or in general /etc/my* (including /etc/mysql/* and other files). MySQL may be running with a different my.cnf (run ps ax | grep mysql to confirm which one).
- Run vmstat 5 5 to see if the system is reading/writing via virtual memory and if it is swapping.
- For non-production environments we can use other tools (like Valgrind, gdb, etc.) to examine MySQL memory usage.
Part 2: Checks inside MySQL
Now we can check things inside MySQL to look for potential MySQL memory leaks.
MySQL allocates memory in many places. Most notably:
- Table cache
- Performance_schema: run show engine performance_schema status and look at the last line. This can be a significant consumer on systems with a small amount of RAM, e.g. 1GB or less.
- InnoDB: run show engine innodb status and check the buffer pool section (memory allocated for the buffer pool and related caches).
- Temporary tables in RAM: find all in-memory tables by running select * from information_schema.tables where engine='MEMORY'.
- Prepared statements that are never deallocated: compare the number of PREPARE and DEALLOCATE PREPARE commands by running show global status like 'Com_prepare_sql'; and show global status like 'Com_dealloc_sql'; (see the quick checks after this list).
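As a quick check for the last two items, here is a minimal sketch (the MEMORY table size calculation is approximate, and Prepared_stmt_count shows how many statements are currently held prepared):

-- prepared statements: created vs. deallocated, and currently open
show global status like 'Com_prepare_sql';
show global status like 'Com_dealloc_sql';
show global status like 'Prepared_stmt_count';
-- in-memory (MEMORY engine) tables and their approximate size
select table_schema, table_name,
       round((data_length + index_length) / 1024 / 1024, 2) as size_mb
from information_schema.tables
where engine = 'MEMORY';

If Com_prepare_sql keeps growing while Com_dealloc_sql stays flat, prepared statements are piling up and holding memory.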
The good news is: starting with MySQL 5.7 we have memory allocation instrumentation in performance_schema. Here is how we can use it.
- First, we need to enable collecting memory metrics. Run:
UPDATE performance_schema.setup_instruments SET ENABLED = 'YES'
WHERE NAME LIKE 'memory/%';
- Run the report from sys schema:
select event_name, current_alloc, high_alloc
from sys.memory_global_by_current_bytes
where current_count > 0;
- Usually this will give you the place in the code where the memory is allocated, and it is often self-explanatory. In some cases we can search for known bugs, or we might need to check the MySQL source code.
For example, for the bug where memory was over-allocated in triggers (https://bugs.mysql.com/bug.php?id=86821) the select shows:
mysql> select event_name, current_alloc, high_alloc from memory_global_by_current_bytes where current_count > 0;
+-----------------------------------+---------------+------------+
| event_name                        | current_alloc | high_alloc |
+-----------------------------------+---------------+------------+
| memory/innodb/buf_buf_pool        | 7.29 GiB      | 7.29 GiB   |
| memory/sql/sp_head::main_mem_root | 3.21 GiB      | 3.62 GiB   |
...
The largest chunk of RAM is usually the buffer pool, but ~3GB allocated in sp_head::main_mem_root (stored programs) seems too high.
According to the MySQL source code documentation, sp_head represents one instance of a stored program which might be of any type (stored procedure, function, trigger, event). In the above case we have a potential memory leak.
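To narrow down which connection or background thread holds that memory, we can also look at the per-thread breakdown in the sys schema (shipped by default with MySQL 5.7). This is just a sketch; it uses the x$ view so the raw byte counts can be sorted and converted:

-- top 10 threads by currently allocated instrumented memory
select thread_id, user,
       round(current_allocated / 1024 / 1024, 2) as current_allocated_mb
from sys.x$memory_by_thread_by_current_bytes
order by current_allocated desc
limit 10;

For client connections the user column is shown as user@host, which makes it easier to map a memory-hungry thread back to the application that opened it.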
In addition, we can get a total report for each higher-level event if we want a bird's-eye view of what is eating memory:
mysql> select substring_index(
    ->          substring_index(event_name, '/', 2),
    ->          '/',
    ->          -1
    ->        ) as event_type,
    ->        round(sum(CURRENT_NUMBER_OF_BYTES_USED)/1024/1024, 2) as MB_CURRENTLY_USED
    -> from performance_schema.memory_summary_global_by_event_name
    -> group by event_type
    -> having MB_CURRENTLY_USED > 0;
+--------------------+-------------------+
| event_type         | MB_CURRENTLY_USED |
+--------------------+-------------------+
| innodb             |              0.61 |
| memory             |              0.21 |
| performance_schema |            106.26 |
| sql                |              0.79 |
+--------------------+-------------------+
4 rows in set (0.00 sec)
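If all we want is a single headline number, the sys schema also has a convenience view for the total instrumented memory (again assuming the sys schema is installed):

select * from sys.memory_global_total;

Keep in mind this only covers instrumented allocations: memory allocated before the instruments were enabled, or by non-instrumented code, is not counted.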
I hope these simple steps help you troubleshoot MySQL crashes caused by running out of memory.