
Introduction to the Intelligent Elasticity Model of AnalyticDB

The article introduces the Intelligent Elasticity Model of AnalyticDB for MySQL, focusing on its features, benefits, and future planning.

By Xiao Chen (Longren)

Preface

With the global economy continuing to slow down, cost reduction and efficiency improvement have gradually become the primary demands for most customers. From the very beginning, Alibaba Cloud AnalyticDB for MySQL, a cloud-native data warehouse, has provided an elastic mode to make database usage more convenient and cost-effective for users. Last year, AnalyticDB for MySQL launched the multi-cluster elasticity solution to optimize the user experience of online resource groups in elastic mode. This year, focusing on job resource groups, we have further proposed the DynamicAllocation elasticity model. It can automatically estimate resource usage in advance, intelligently scale during execution, and optimize elasticity efficiency based on resource hot pools, addressing the problem of slow query startup.

1. Zhang's Confusion

One day, Zhang received a task from his boss to analyze the sales situation of the last month. So he opened the AnalyticDB for MySQL console and was ready to submit an offline query.

After checking the official documentation, Zhang found that he needed to manually specify the amount of resources for the query. Not being a database expert, he had to repeatedly consult the architecture introduction and resource specification documents on the official website before cautiously estimating the required resources. After finally submitting the query, he noticed that it did not start for a long time. He nervously refreshed the page several times, and only after nearly half a minute did the query begin to execute.

Half a month later, when asked by his boss why the database expenses last month were much higher than expected, Zhang checked the average resource usage and found it was not particularly high. Confused, he submitted a ticket to the on-duty group.

The above scenario reflects common problems many customers encounter while using offline queries. To enhance customer experience, AnalyticDB for MySQL has launched an intelligent elasticity model and optimized elasticity efficiency, aiming to solve these pain points.

2. Intelligent Elasticity Model

Offline queries are suitable for high-throughput scenarios and are one of the main capabilities of OLAP data warehouses. In AnalyticDB for MySQL, this capability is mainly provided through job resource groups. In job resource groups, a per-query resource scheduling mode is adopted, that is, the control node allocates independent resources to each query for use, and resources are isolated between different queries.

UserDefinedAllocation Model

Earlier job queries, like those of many other products, used a user-defined resource allocation approach: before executing a query, users had to specify the total amount and type of resources the query would use. For example, if a user submits a query with a total of 17 ACUs and a node specification of 4 cores, four 4-core compute nodes are allocated for the query.

[Figure 1]

Note: Each query requires an additional 1-core AppMaster node for plan generation and task scheduling.
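
A rough sketch of this allocation arithmetic (the helper name and return shape are illustrative, not an actual API):

```python
import math

def plan_allocation(total_acu: int, node_size: int) -> dict:
    """Sketch of UserDefinedAllocation: a 1-core AppMaster is carved out of
    the total, and the remainder is split into fixed-size compute nodes."""
    appmaster_cores = 1
    compute_cores = total_acu - appmaster_cores
    return {
        "appmaster_cores": appmaster_cores,
        "compute_nodes": math.ceil(compute_cores / node_size),
        "node_size": node_size,
    }

# 17 ACUs with 4-core nodes: 1 AppMaster core + four 4-core compute nodes
print(plan_allocation(total_acu=17, node_size=4))
```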

DynamicAllocation Model

Automatic Resource Estimation

Automatic resource estimation is also known as autoWLM. It trains a dedicated machine learning model based on a user's past queries and resource usage.

This model is a hybrid model that first determines whether the query matches an existing pattern. If it does, the model predicts resource usage based on historical query data and current table statistics. If not, it estimates resource usage based on table statistics and the query plan parsed by the optimizer.

[Figure 2]

For unknown query patterns, after several executions, the system automatically records the pattern information in the historical pattern database. Through rapid iterations of execution and training, new patterns thus become known patterns and follow the left-side estimation path, which offers better estimation accuracy.

After implementing automatic resource estimation, users no longer need to manually specify the resource usage for each query; instead, all resource allocation is estimated by the model.
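
The two estimation paths can be sketched as follows; the fingerprinting scheme, the averaging rule, and the fallback formula are simplified assumptions for illustration, not the actual model:

```python
import re

HISTORY = {}  # query fingerprint -> observed ACU usage of past runs

def fingerprint(sql: str) -> str:
    # Crude pattern key: strip literals so similar queries share a pattern.
    return re.sub(r"\d+|'[^']*'", "?", sql.lower()).strip()

def estimate_acu(sql: str, table_rows: int) -> float:
    runs = HISTORY.get(fingerprint(sql))
    if runs:
        # Known pattern: predict from historical usage.
        return sum(runs) / len(runs)
    # Unknown pattern: heuristic from table statistics (toy formula).
    return max(1.0, table_rows / 1_000_000)

def record_run(sql: str, used_acu: float) -> None:
    HISTORY.setdefault(fingerprint(sql), []).append(used_acu)

# First run falls back to the heuristic; later runs use history.
q = "SELECT * FROM sales WHERE amount > 100"
est1 = estimate_acu(q, table_rows=8_000_000)   # heuristic path
record_run(q, used_acu=5.0)
# Same pattern after literal stripping, so the history path is taken.
est2 = estimate_acu("SELECT * FROM sales WHERE amount > 250", 8_000_000)
```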

Intelligent Scaling

Dynamic resource allocation differs from the traditional elasticity model, in which resources must be held until the query completes before they can be released. Instead, it automatically scales resources based on the specified usage, whether estimated by the model or manually specified by the user.

First, the system calculates the minimum resource usage (min) and the step size for resource scaling; the maximum resource usage (max) is the specified resource amount. During execution, it periodically checks the current concurrency and load to determine whether scaling is needed. Each scaling operation adjusts resources by one step, keeping the total resource usage within the [min, max] range.

When concurrency or load is low, queries do not require the maximum resource allocation but only a small portion of it. Since billing in job mode is based on actual resource usage, scaling down in this case effectively reduces the cost of job queries.

[Figure 3]

The preceding figure illustrates a scale-in scenario. When conditions meet the criteria for scale-in, a corresponding number of executors are selected based on the step size, and the associated pods are released. Billing for a node stops immediately after its release.
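
The scaling rule described above can be sketched as a simple control loop; the load thresholds and the sample values are assumptions for illustration:

```python
def scale_decision(current_acu, load, min_acu, max_acu, step,
                   high=0.8, low=0.3):
    """Illustrative scaling rule: scale up by one step when load is high,
    down by one step when low, always clamped to the [min, max] range."""
    if load > high and current_acu < max_acu:
        return min(current_acu + step, max_acu)
    if load < low and current_acu > min_acu:
        return max(current_acu - step, min_acu)
    return current_acu

acu = 8
for load in [0.9, 0.9, 0.2]:   # periodic load samples during execution
    acu = scale_decision(acu, load, min_acu=4, max_acu=16, step=4)
print(acu)  # scaled up twice under high load, then back down once
```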

Delayed Scheduling

By default, the job mode uses batch scheduling. To avoid OOM problems in large queries, intermediate results are forcibly written to disk, which makes the execution efficiency of batch scheduling inferior to that of MPP scheduling. For small and medium queries, memory shortages are unlikely and execution efficiency is the priority, making MPP scheduling the better choice.

Another significant difference is that MPP scheduling divides the execution plan according to the number of nodes to maximize resource utilization across them, so it must wait for all nodes to be ready before scheduling starts. Conversely, batch scheduling can start once any single node is ready.

With delayed scheduling enabled, AnalyticDB for MySQL automatically evaluates the resource usage and estimated time of a query to determine if it qualifies as a small or medium query. If so, it switches to MPP scheduling, waiting until all required resources are allocated before initiating the scheduling process.
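
A minimal sketch of this scheduling decision, with hypothetical thresholds (the real system's criteria are not documented here):

```python
def choose_scheduler(estimated_acu: float, estimated_seconds: float,
                     acu_limit: float = 16.0, time_limit: float = 60.0) -> str:
    """Hypothetical rule: small and medium queries (low resource and time
    estimates) switch to MPP scheduling and wait for all nodes; large
    queries keep batch scheduling, which can start on any ready node."""
    if estimated_acu <= acu_limit and estimated_seconds <= time_limit:
        return "mpp"      # wait for all nodes, prioritize execution speed
    return "batch"        # start incrementally, spill to disk for safety
```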

3. Elasticity Efficiency Optimization

Elasticity efficiency is a topic that cannot be avoided in the serverless field. Even the most precise and intelligent resource scheduling will not yield good results if each scaling operation takes a considerable amount of time. Moreover, for per-query resource scheduling such as job queries, every query needs to request resources after being submitted by the user, a process also influenced by elasticity efficiency. If elasticity efficiency is low, even a simple point query might take a long time, undoubtedly impacting the user experience of AnalyticDB for MySQL.

[Figure 4]

The preceding figure illustrates the lifecycle of a job query within the database kernel after submission, which includes three steps: generating the plan, requesting resources, and executing the query. The key factor that affects elasticity efficiency lies in the step of requesting resources.

Direct Request

[Figure 5]

Previously, the request process involved directly applying for pods from the Kubernetes service. Whenever a query request arrived, the system would apply for the corresponding number of pods based on the specified resource amount, and then start the AnalyticDB for MySQL service on these pods.

Resource Hot Pool

Pooling is a popular concept in computing. It means prefetching a batch of resources so they can be used immediately without additional request time when needed, similar to how thread pools work.

The resource hot pool of AnalyticDB for MySQL also uses the idea of pooling. A resource hot pool is built in each region, in which a batch of executor nodes are prefetched. Multiple versions of resource hot pools are built in parallel to support customers with different instance versions.

[Figure 6]

If a user instance receives a job query, the control node does not directly request resources from the Kubernetes cluster. Instead, it first checks the resource hot pool to see if there are nodes matching the instance version. If such nodes are available, they are used directly from the hot pool. If not, resources are then requested from the Kubernetes cluster.

After nodes are taken from the hot pool, the system replenishes the corresponding number of nodes to keep the pool at a stable size.
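
The hot-pool behavior described in this section can be sketched as follows; the class, the naming scheme, and the synchronous replenishment are simplifications for illustration (in the real system, replenishment and the Kubernetes fallback are asynchronous requests):

```python
from collections import deque

class HotPool:
    """Toy resource hot pool: nodes are prefetched per instance version;
    acquire() checks the pool first, and replenish() restores its size."""
    def __init__(self, version: str, target: int):
        self.version, self.target = version, target
        self._refill = 0
        self.nodes = deque(f"{version}-node-{i}" for i in range(target))

    def acquire(self, n: int) -> list:
        # Take as many warm nodes as the pool can supply.
        taken = [self.nodes.popleft() for _ in range(min(n, len(self.nodes)))]
        # Fall back to the Kubernetes cluster for any shortfall (simulated).
        missing = n - len(taken)
        taken += [f"{self.version}-cold-{i}" for i in range(missing)]
        self.replenish()
        return taken

    def replenish(self) -> None:
        # Refill the pool back to its target size.
        while len(self.nodes) < self.target:
            self.nodes.append(f"{self.version}-refill-{self._refill}")
            self._refill += 1

pool = HotPool("v3.1.8", target=4)
nodes = pool.acquire(6)   # 4 warm nodes plus 2 cold-started ones
```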

4. Advantages of the New Elasticity Model

Concept Introduction

RT: RunningTime, the execution time of the query.

ACU-hours: ACU (AnalyticDB Compute Unit) is the unit used by AnalyticDB for unified metering and billing. 1 ACU-hour means 1 ACU running for one hour and measures the duration of resource consumption. In the following, 1 ACU corresponds to 1 core and 2 GB of memory.
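
As a worked example of this metering unit (the ACU counts and durations are made up for illustration):

```python
def acu_hours(acu: float, seconds: float) -> float:
    """ACU-hours metering: resources consumed = ACUs held x hours held."""
    return acu * seconds / 3600

# A query holding 16 ACUs for 90 seconds consumes 0.4 ACU-hours; with
# intelligent scaling it might instead hold 16 ACUs for 30 s and 8 for 60 s,
# consuming less in total even though the query runs just as long.
full = acu_hours(16, 90)
scaled = acu_hours(16, 30) + acu_hours(8, 60)
```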

Resource Hot Pool

A point query scans only a small amount of data. This type of query has simple execution plans and typically completes within 1 second. Therefore, in job mode, the execution time of such queries is closely related to the time taken to request resources. Point queries can clearly show changes in elasticity efficiency.

[Figure 7]

The figure above shows the execution time of 20 point queries with and without the resource hot pool enabled. With the hot pool enabled, the resource request time decreases by about 30 seconds, and the performance of small and medium queries improves by up to 88%.

Automatic Resource Estimation

With automatic resource estimation, users no longer need to manually specify resource usage. After a query is submitted, autoWLM uses a machine learning model to estimate a reasonable resource amount for the query based on historical query data.

[Figure 8]

The preceding figure compares the resource usage of queries before and after enabling automatic estimation in the TPC-H 1TB dataset. If users directly specify the resources, all queries will use the default 64 ACUs. Conversely, if automatic estimation is used, a slider is provided for user convenience, allowing users to freely decide between cost priority and performance priority. Regardless of the choice, automatic estimation can provide a reasonable resource allocation based on different query loads.

[Figure 9]

In cost priority mode, the total running time (RT) of all queries remains the same as when resources are directly specified, but the total cost (or the total ACU-hours) decreases by about 60.5%. In performance priority mode, the total running time (RT) of all queries decreases by 15%, while the cost can still be saved by 53.2%.

Intelligent Scaling

[Figure 10]

The preceding figure shows the resource usage of a typical query with DynamicAllocation intelligent scaling. Initially, the query keeps scaling up because the actual load fully utilizes the allocated resources, running at full load for a period of time. As the query approaches completion, resource demand drops because of the write-concurrency limits of the target data source, so the system automatically scales down resources until the query finishes.

[Figure 11]

The figure above shows the ACU consumption in testing with TPC-H 100 GB. With intelligent scaling enabled, the load of most queries decreases significantly; compared with before, intelligent scaling reduces the total ACU-hours of all queries by 30.4%.

5. Summary

Compared with the previous elasticity model, the new intelligent elasticity model has greatly improved in the following three aspects:

Ease of use:

• Automatic resource estimation eliminates the need for users to manually set resource usage.

Cost:

• Intelligent scaling effectively reduces idle load caused by concurrency or data source limitations, improving resource utilization.

• The resource hot pool significantly reduces resource waste and speeds up resource allocation for small and medium queries, effectively lowering RT and thus reducing costs.

Performance:

• The resource hot pool optimizes the previously high resource allocation time, greatly improving the RT for small and medium queries.

• Due to the improvement in resource utilization, even if the cost is reduced, more resources can still be allocated in exchange for performance improvement.

6. Future Planning

In the future, we will continue to refine and enhance the following aspects:

Automatic delivery: Based on load estimation, intelligently distinguish between small and large queries. Small queries will be automatically delivered to online resource groups, while large queries will be delivered to offline resource groups, achieving a balance between cost and performance.

Elasticity efficiency: Improve network connectivity to further reduce startup time.

Result reports: Add a query-level report in the console to help users better understand the execution time and ACU consumption of each query.

Multi-level slider: On top of the existing automatic estimation capabilities, introduce multi-level sliders to allow users to more freely choose the balance between cost and performance, catering to the needs of different customers.
