HeatWave for MySQL: Technical Deep Dive
In the previous blog, we provided insights into how the MySQL database has been enhanced to integrate with a native, high-performance analytics engine, HeatWave: https://blogs.oracle.com/mysql/breakthrough-enhancements-in-mysql-database-service-with-analytics-engine
HeatWave is a new, massively parallel, in-memory query engine developed by Oracle Labs and designed to accelerate analytic workloads. HeatWave provides 100x to 1000x acceleration over MySQL and MySQL-compatible databases such as Aurora.
Four key architectural choices give HeatWave its compelling performance:
- An in-memory hybrid columnar format that is conducive to vector processing.
- A massively parallel architecture, enabled by massively partitioned data.
- State-of-the-art algorithms for distributed analytic processing.
- An engine optimized for Oracle Cloud Infrastructure (OCI).
Architecture Overview of HeatWave
HeatWave uses a columnar in-memory representation that facilitates vectorized processing, leading to very good query performance (Figure 1). Data is encoded and compressed before being loaded into memory, and this compressed, optimized in-memory representation is used for both numeric and string data. The result is a significantly reduced memory footprint, which translates into lower cost for the customer. Both the data organization in memory and the query processing are structured to be amenable to vector and SIMD processing, which reduces interpretation overhead for queries and improves query performance.
Figure 1. Vectorized in-memory columnar representation for analytic processing
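To make this concrete, below is a minimal, hypothetical C++ sketch (not HeatWave source code) of a dictionary-encoded columnar layout: numeric values and dictionary codes live in contiguous arrays, and a filter-plus-aggregate query becomes a tight column-at-a-time loop that compilers can auto-vectorize with SIMD instructions. All names and sizes are illustrative.

```cpp
// Hypothetical sketch of a dictionary-encoded columnar layout (not HeatWave code).
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

struct ColumnStore {
    std::vector<int64_t> price;           // numeric column, stored contiguously
    std::vector<uint32_t> country_code;   // string column, dictionary-encoded
    std::vector<std::string> dictionary;  // code -> original string
    std::unordered_map<std::string, uint32_t> dict_index;

    uint32_t encode(const std::string& s) {
        auto it = dict_index.find(s);
        if (it != dict_index.end()) return it->second;
        uint32_t code = static_cast<uint32_t>(dictionary.size());
        dictionary.push_back(s);
        dict_index.emplace(s, code);
        return code;
    }
};

// Column-at-a-time evaluation: one pass over contiguous arrays, which is
// amenable to SIMD auto-vectorization by the compiler.
int64_t sum_price_for_country(const ColumnStore& t, uint32_t code) {
    int64_t sum = 0;
    for (size_t i = 0; i < t.price.size(); ++i)
        sum += (t.country_code[i] == code) ? t.price[i] : 0;
    return sum;
}

int main() {
    ColumnStore t;
    for (int i = 0; i < 1000; ++i) {
        t.price.push_back(i);
        t.country_code.push_back(t.encode(i % 2 ? "US" : "DE"));
    }
    std::cout << sum_price_for_country(t, t.encode("US")) << "\n";
}
```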
One of the key design points of HeatWave is to massively partition data across a cluster of analytics nodes, so that the partitions can be operated on in parallel within each node (Figure 2). Partitioning is done at near memory bandwidth, and the size of the partitions is optimized for the underlying cache. This yields high cache-hit rates for analytic operations and good inter-node scalability. Each analytics node within a cluster, and each core within a node, can process partitioned data in parallel, including parallel scans, joins, group-by, aggregation, and top-k processing.
Figure 2. Massively parallel architecture
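The sketch below illustrates the partition-and-parallelize idea under simple assumptions (it is not HeatWave code, just standard C++): a column is split into fixed-size partitions, with the partition size standing in for a cache-friendly granularity; each partition is aggregated on its own core; and the partial results are combined at the end.

```cpp
// Hypothetical sketch: cache-sized partitions aggregated in parallel (not HeatWave code).
#include <algorithm>
#include <cstdint>
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

int64_t parallel_sum(const std::vector<int64_t>& col, size_t partition_rows) {
    std::vector<std::future<int64_t>> parts;
    for (size_t begin = 0; begin < col.size(); begin += partition_rows) {
        size_t end = std::min(begin + partition_rows, col.size());
        // Each partition is scanned independently, so cores do not contend.
        parts.push_back(std::async(std::launch::async, [&col, begin, end] {
            return std::accumulate(col.begin() + begin, col.begin() + end, int64_t{0});
        }));
    }
    int64_t total = 0;
    for (auto& p : parts) total += p.get();  // combine the partial aggregates
    return total;
}

int main() {
    std::vector<int64_t> col(1'000'000, 1);
    // Partition size chosen (hypothetically) so each chunk fits in cache.
    std::cout << parallel_sum(col, 64 * 1024) << "\n";
}
```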
Data partitioning in HeatWave memory can also be driven by a workload-aware mechanism that minimizes data movement across cores and nodes, which further improves scalability. Partitioning on a workload-dependent column reduces data movement for queries with join (or group-by) predicates, since it co-locates the matching data across the tables in the join condition, as illustrated in the sketch below.
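As a simple illustration of co-location, the following sketch hash-partitions two hypothetical tables on the same join key; any pair of rows that could ever join necessarily lands in the same partition, so the join requires no cross-partition (or cross-node) data movement. Table and column names are made up.

```cpp
// Hypothetical sketch: co-locating join data by hash-partitioning on the join key.
#include <cstdint>
#include <functional>
#include <iostream>
#include <vector>

constexpr size_t kPartitions = 4;  // illustrative partition count

size_t partition_of(int64_t join_key) {
    return std::hash<int64_t>{}(join_key) % kPartitions;
}

int main() {
    // Hypothetical join keys from two tables joined on custkey.
    std::vector<int64_t> orders_custkey   = {1, 2, 3, 4, 5, 6};
    std::vector<int64_t> customer_custkey = {1, 3, 5};

    std::vector<std::vector<int64_t>> order_parts(kPartitions), cust_parts(kPartitions);
    for (int64_t k : orders_custkey)   order_parts[partition_of(k)].push_back(k);
    for (int64_t k : customer_custkey) cust_parts[partition_of(k)].push_back(k);

    // Both tables use the same partition function on the join key, so
    // partition i of orders only ever joins with partition i of customer.
    for (size_t i = 0; i < kPartitions; ++i)
        std::cout << "partition " << i << ": " << order_parts[i].size()
                  << " order keys, " << cust_parts[i].size() << " customer keys\n";
}
```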
HeatWave implements state-of-the-art algorithms for distributed analytic processing (Figure 3). Joins within a partition are processed quickly using vectorized build and probe join kernels, and network communication between analytics nodes is optimized using asynchronous batch I/O.
Figure 3. Distributed analytic algorithms optimized for OCI
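The following sketch shows the build/probe structure of a hash join within one partition, written in plain scalar C++; the actual HeatWave kernels are vectorized and process batches of keys at a time. Table names, columns, and values are invented for illustration.

```cpp
// Hypothetical sketch of a build/probe hash join within one partition (scalar form).
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

int main() {
    // Build side: customer(custkey) -- illustrative column.
    std::vector<int64_t> cust_key = {1, 3, 5};

    // Probe side: orders(custkey, price) -- illustrative columns.
    std::vector<int64_t> ord_key   = {1, 2, 3, 4, 5, 5};
    std::vector<int64_t> ord_price = {7, 8, 9, 10, 11, 12};

    // Build phase: map each join key to its row index on the build side.
    std::unordered_map<int64_t, size_t> build;
    for (size_t i = 0; i < cust_key.size(); ++i) build.emplace(cust_key[i], i);

    // Probe phase: one pass over the probe columns; matches contribute to the result.
    int64_t revenue = 0;
    for (size_t i = 0; i < ord_key.size(); ++i) {
        auto it = build.find(ord_key[i]);
        if (it != build.end()) revenue += ord_price[i];  // joined row found
    }
    std::cout << "sum(price) over joined rows = " << revenue << "\n";
}
```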
Furthermore, HeatWave has an intelligent scheduler that efficiently overlaps computation with network communication, so that while data is being transferred between nodes, each node stays busy with computation (Figure 4). This helps achieve good scalability across a large number of servers.
Figure 4. Intelligent scheduler to overlap compute time with network communication
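The sketch below illustrates this overlap pattern with standard C++ futures (it is a toy model, not the HeatWave scheduler): while one batch is in flight, with a sleep standing in for the network transfer, the node keeps computing on the next batch, so communication time is hidden behind computation.

```cpp
// Hypothetical sketch: overlapping computation with (simulated) network transfers.
#include <chrono>
#include <cstdint>
#include <future>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Stand-in for an asynchronous batch network send; a sleep proportional to the
// batch size simulates time on the wire.
std::future<void> send_async(std::vector<int64_t> batch) {
    return std::async(std::launch::async, [b = std::move(batch)] {
        std::this_thread::sleep_for(std::chrono::microseconds(b.size() / 20));
    });
}

int main() {
    std::vector<std::vector<int64_t>> batches(4, std::vector<int64_t>(1'000'000, 1));
    std::future<void> in_flight;
    int64_t local_sum = 0;

    for (auto& batch : batches) {
        auto next_send = send_async(batch);          // start the transfer
        local_sum += std::accumulate(batch.begin(),  // compute while it is in flight
                                     batch.end(), int64_t{0});
        if (in_flight.valid()) in_flight.wait();     // drain the previous transfer
        in_flight = std::move(next_send);
    }
    if (in_flight.valid()) in_flight.wait();
    std::cout << local_sum << "\n";
}
```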
HeatWave uses AMD-based VM shapes in OCI, which provide the best price per GB of DRAM. Various aspects of query processing have been optimized for Oracle Cloud Infrastructure: for example, the granularity of partitioning has been tuned for the cache size of the underlying compute shape, the algorithms are NUMA-aware, and the query cost model factors in the network bandwidth available in OCI.
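As an illustration of why the cost model must be platform-aware, here is a toy cost estimate (not HeatWave's actual cost model) in which an operator's time is the larger of its in-memory scan time and its network transfer time, both derived from hypothetical characteristics of the compute shape.

```cpp
// Toy cost formula (not HeatWave's actual cost model): transfer cost depends on
// the network bandwidth of the compute shape, scan cost on per-core throughput.
#include <algorithm>
#include <iostream>

struct ShapeProfile {
    double network_gbit_per_s;     // network bandwidth of the shape (gigabits/s)
    double scan_gbyte_per_s_core;  // in-memory scan throughput per core (GB/s)
    int    cores;
};

double estimate_seconds(double bytes_scanned, double bytes_shipped, const ShapeProfile& s) {
    double scan_s = bytes_scanned / (s.scan_gbyte_per_s_core * 1e9 * s.cores);
    double net_s  = bytes_shipped / (s.network_gbit_per_s * 1e9 / 8);
    // With compute/communication overlap, the slower of the two dominates.
    return std::max(scan_s, net_s);
}

int main() {
    ShapeProfile shape{25.0, 5.0, 64};  // hypothetical shape characteristics
    std::cout << estimate_seconds(4e11, 2e10, shape) << " s\n";
}
```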
Machine Learning Based Automation
HeatWave leverages automated machine learning (AutoML) capabilities developed by Oracle Labs to automate various aspects of the service. This automation helps both users of the service and the engineers managing it. Because the automation is based on machine learning, the system can intelligently predict and act upon various scenarios.
Auto Provisioning provides a recommendation on how many nodes are required to run a workload. When the service is started, the database tables on which analytics queries will run need to be loaded into HeatWave memory. The size of the HeatWave cluster needed depends on the tables and columns to be loaded, and on the compression achieved for this data in memory. Figure 5 compares the traditional (i.e., manual) approach to estimating the cluster size with Auto Provisioning. With traditional provisioning, the user has to guess a cluster size. Underestimating results in data load or query execution failures due to space limitations; overestimating results in additional cost for unneeded resources. As a result, users iterate until they determine the right cluster size, and even then the estimate becomes inaccurate as tables are updated.
The right side of Figure 5 shows how Auto Provisioning, an ML-based cluster size estimation advisor, solves this problem. By leveraging well-trained, accurate ML models, the user obtains the right cluster size for their dataset without having to guess. Later, if the data grows or additional tables are added, the user can consult the Auto Provisioning advisor again.
Figure 5. Comparison of a traditional provisioning flow vs. Auto Provisioning
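The sketch below conveys the shape of such an estimate (it is not the Auto Provisioning model): sum the in-memory footprint of the columns to be loaded, using a fixed predicted compression ratio in place of the trained ML model's prediction, and divide by the usable memory per node. All numbers are hypothetical.

```cpp
// Hypothetical sketch of cluster-size estimation (not the Auto Provisioning model).
#include <cmath>
#include <iostream>
#include <vector>

struct TableEstimate {
    double row_count;
    double bytes_per_row;          // only the columns that will be loaded
    double predicted_compression;  // stand-in for the ML model's prediction
};

int estimate_nodes(const std::vector<TableEstimate>& tables,
                   double usable_bytes_per_node) {
    double total = 0;
    for (const auto& t : tables)
        total += t.row_count * t.bytes_per_row / t.predicted_compression;
    return static_cast<int>(std::ceil(total / usable_bytes_per_node));
}

int main() {
    std::vector<TableEstimate> tables = {
        {6e9, 120, 2.0},   // hypothetical fact table
        {1.5e9, 80, 2.5},  // hypothetical dimension table
    };
    std::cout << estimate_nodes(tables, 400e9) << " nodes\n";  // hypothetical node memory
}
```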
The techniques described in this blog help HeatWave achieve good performance on a single node and scale very well across a large number of nodes. Additionally, since HeatWave is optimized for the underlying OCI platform, it is very cost effective. In the next blog, we will share performance results.