AnalyticDB for MySQL provides the lake cache feature to cache frequently accessed Object Storage Service (OSS) objects on high-performance NVMe SSDs, which improves the read efficiency of OSS data. This feature is suitable for scenarios that require large bandwidth to process repeated data reads, such as when multiple users query the same copy of data for analysis. This topic describes the benefits, scenarios, and usage of the lake cache feature.
Prerequisites
An AnalyticDB for MySQL Data Lakehouse Edition cluster is created.
Overview
How it works
The lake cache feature works in the following way:
When the lake cache client sends a read request to OSS, the read request is forwarded to a lake cache master node to request the object metadata.
The lake cache master node returns the object metadata to the lake cache client.
The lake cache client uses the object metadata to send a request to lake cache worker nodes for OSS objects.
If the requested objects are stored on the lake cache worker nodes, the worker nodes return the objects to the lake cache client.
If the requested objects are not stored on the lake cache worker nodes, the lake cache feature obtains the objects from OSS, returns them to the lake cache client, and stores them in the cache space.
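The following minimal Python sketch models this read path. It is for illustration only, and every name in it (LakeCacheClient, get_metadata, and so on) is hypothetical rather than part of the actual implementation.

# Minimal sketch of the lake cache read path described above.
# All class and method names are hypothetical; the real implementation
# is internal to AnalyticDB for MySQL.
class LakeCacheClient:
    def __init__(self, master, workers, oss):
        self.master = master    # lake cache master node (object metadata)
        self.workers = workers  # lake cache worker nodes (cached objects)
        self.oss = oss          # OSS backend (source of truth)

    def read(self, object_key):
        # Ask the master node for the object metadata, which identifies
        # the worker node that may hold the object.
        metadata = self.master.get_metadata(object_key)
        worker = self.workers[metadata.worker_id]

        # Request the object from the worker node's cache.
        cached = worker.get(object_key, metadata)
        if cached is not None:
            return cached  # cache hit: served from NVMe SSD

        # Cache miss: fetch the object from OSS, store it in the cache
        # space, and return it to the caller.
        data = self.oss.get_object(object_key)
        worker.put(object_key, metadata, data)
        return data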
Benefits
Millisecond-level latency
The NVMe SSDs used by the lake cache feature provide read latency within milliseconds.
Increased throughput
The bandwidth provided by the lake cache feature increases linearly with the cache size. The maximum burst throughput can reach hundreds of Gbit/s.
High throughput density
The lake cache feature can provide high throughput even for small volumes of data, which meets burst read requirements for small amounts of hot data.
Elastic scaling
The lake cache feature allows you to scale the cache space up or down based on your business requirements to prevent resource waste and reduce costs. The cache size can be in the range of 10 GB to 10 TB.
Decoupled storage and computing
Unlike the cache space on compute nodes, the lake cache feature decouples storage resources from computing resources and allows you to modify the cache size directly in the AnalyticDB for MySQL console.
Data consistency
The lake cache feature can automatically identify and cache the updated OSS objects to ensure that the compute engine reads the updated data.
Performance metrics and cache policy
Parameter | Description |
Cache bandwidth | The cache bandwidth is calculated by using the following formula: Cache bandwidth (Gbit/s) = 5 × Cache size (TB). For example, if you set the cache size to 10 TB for the lake cache feature, the read bandwidth is calculated by using the following formula: (5 × 10) Gbit/s = 50 Gbit/s. |
Cache size | The cache size can be in the range of 10 GB to 10 TB. The lake cache feature provides bandwidth for the cached data based on the cache size that you specify: up to 5 Gbit/s of bandwidth is available per 1 TB of cache size. The bandwidth provided by the lake cache feature is not limited by the standard bandwidth provided by OSS. If your business requires a larger cache size, submit a ticket. |
Cache eviction policy | If the used cache space exceeds the cache size that you specified, the system uses the Least Recently Used (LRU) eviction policy to evict cached data. The LRU policy retains frequently accessed data and preferentially removes infrequently accessed data, which significantly improves cache utilization. For an illustration of this behavior, see the sketch after this table. |
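The following Python sketch models the LRU eviction policy described in the preceding table. It is a simplified illustration that assumes byte-level accounting, not the internal implementation of the lake cache.

from collections import OrderedDict

# Simplified LRU cache model, for illustration only. The real lake cache
# evicts OSS objects from NVMe SSDs; here the cache is capped by total
# bytes and the least recently used entries are evicted first.
class LRUCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = OrderedDict()  # key -> data, ordered by recency

    def get(self, key):
        if key not in self.entries:
            return None                # cache miss
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, data):
        if key in self.entries:
            self.used -= len(self.entries.pop(key))
        self.entries[key] = data
        self.used += len(data)
        # Evict the least recently used entries until the data fits.
        while self.used > self.capacity and self.entries:
            _, evicted = self.entries.popitem(last=False)
            self.used -= len(evicted)

# Example: a 10 GB cache evicts cold objects first when it overflows.
cache = LRUCache(capacity_bytes=10 * 1024**3)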
Performance testing
In this test, TPC-H queries are executed to check whether the lake cache feature of AnalyticDB for MySQL improves OSS data access efficiency. Compared with direct access to OSS data, the lake cache feature improves data access efficiency by about 2.7 times (19,578 s ÷ 7,219 s ≈ 2.7). The following table describes the test results.
Type | Cache size | Dataset size | Spark resource specifications | Execution duration |
Lake cache feature enabled | 12 TB | 10 TB | 2 cores, 8 GB (medium) | 7,219 s |
Direct access to OSS data | None | 10 TB | 2 cores, 8 GB (medium) | 19,578 s |
Billing rules
After you enable the lake cache feature, you are charged for the cache storage based on the pay-as-you-go billing method. For more information, see Pricing for Data Lakehouse Edition.
Usage notes
The lake cache feature is supported only in the following regions: China (Hangzhou), China (Shanghai), China (Beijing), China (Zhangjiakou), China (Shenzhen), Singapore, and US (Virginia).
Important: If you want to use the lake cache feature in other regions, submit a ticket.
If a fault occurs on the cache hardware, data queries are not interrupted but slow down. After the cached data is prefetched from OSS again, the query speed is restored.
When the cache space used by the lake cache feature reaches the upper limit, the lake cache feature replaces the infrequently accessed objects in the cache space with frequently accessed objects. If you do not want to replace the objects stored in the cache space, you can increase the cache size for the lake cache feature.
Enable the lake cache feature
Log on to the AnalyticDB for MySQL console. In the upper-left corner of the console, select a region. In the left-side navigation pane, click Clusters. On the Data Lakehouse Edition tab, find the cluster that you want to manage and click the cluster ID.
In the Configuration Information section of the Cluster Information page, click Configure next to Lake Cache.
In the Lake Cache dialog box, turn on Lake Cache and specify a cache size.
Click OK.
Use the lake cache feature
After you enable the lake cache feature, you can specify the spark.adb.lakecache.enabled parameter for Spark jobs to accelerate OSS data reads. Sample code:
Spark SQL development
-- This example only shows how to enable the lake cache feature. Replace the sample statements with your own Spark SQL.
SET spark.adb.lakecache.enabled=true;
-- Your SQL statements follow. For example:
SHOW databases;
Spark JAR development
{
  "comments": [
    "-- This example only shows how to enable the lake cache feature. Modify the content to run your own Spark program."
  ],
  "args": ["oss://testBucketName/data/readme.txt"],
  "name": "spark-oss-test",
  "file": "oss://testBucketName/data/example.py",
  "conf": {
    "spark.adb.lakecache.enabled": "true"
  }
}
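You can also set the parameter in code when you develop a PySpark job. The following is a minimal sketch: the application name and OSS path are placeholders, and it assumes that the job runs on AnalyticDB for MySQL Spark, where the spark.adb.lakecache.enabled parameter takes effect.

# Minimal PySpark sketch that enables the lake cache feature for OSS reads.
# The application name and OSS path are placeholders; replace them with
# your own values.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("spark-oss-test")
    .config("spark.adb.lakecache.enabled", "true")  # serve OSS reads from the lake cache
    .getOrCreate()
)

# Repeated reads of the same OSS objects are accelerated by the cache.
df = spark.read.parquet("oss://testBucketName/data/")
df.show()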
If you want to use the lake cache feature together with the XIHE engine, submit a ticket.
View the monitoring information about the lake cache feature
After you enable the lake cache feature, you can check whether the submitted Spark applications use the lake cache feature in the CloudMonitor console. You can also view the monitoring information about the lake cache feature, such as the amount of data that is read from the cache space. Perform the following steps:
Log on to the CloudMonitor console.
In the left-side navigation pane, choose Cloud Service Monitoring. Move the pointer over the AnalyticDB for MySQL card and click AnalyticDB for MySQL 3.0 - Data Lakehouse Edition.
Find the AnalyticDB for MySQL cluster that you want to monitor and click Monitoring Charts in the Actions column.
Click the LakeCache Metrics tab to view the details about the lake cache metrics.
The following table describes the lake cache metrics.
Metric | Description |
LakeCache Cache Hit Ratio(%) | The cache hit ratio. Formula: Number of read requests that hit the cache / Total number of read requests. |
LakeCache Cache Usage(B) | The used cache space. Unit: bytes. |
Total Amount of Historical Cumulative Read Data of LakeCache(B) | The total amount of data that is read from the cache space. Unit: bytes. |
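The hit ratio formula can be illustrated with a quick calculation. The request counts below are made up for the example.

# Hypothetical request counts, used only to illustrate the hit ratio formula.
cache_hits = 9_200    # read requests served from the lake cache
total_reads = 10_000  # total read requests

hit_ratio = cache_hits / total_reads * 100
print(f"LakeCache cache hit ratio: {hit_ratio:.1f}%")  # prints 92.0%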