
AnalyticDB: Release notes

Last Updated: Dec 23, 2024

This topic describes the release notes for AnalyticDB for MySQL and provides links to the relevant references.

Usage notes

Take note of the following items during minor version updates of AnalyticDB for MySQL clusters:

  • For AnalyticDB for MySQL clusters in reserved mode for Cluster Edition, or clusters in elastic mode for Cluster Edition that have 32 cores or more, data read and write operations are not interrupted when engine versions are updated. Within the 5 minutes before the update is complete, queries may experience transient disconnections.

  • For AnalyticDB for MySQL clusters in elastic mode for Cluster Edition that have 8 or 16 cores, data write operations may be interrupted for 30 minutes when engine versions are updated. Within the 5 minutes before the update is complete, queries may experience transient disconnections.

  • Minor version updates of AnalyticDB for MySQL clusters do not affect database access, account management, database management, or IP address whitelist settings.

  • During a minor version update of an AnalyticDB for MySQL cluster, network jitters may occur and affect write and query operations. Make sure that your application is configured to automatically reconnect to the AnalyticDB for MySQL cluster.

  • During a minor version update of an AnalyticDB for MySQL cluster, the cluster may experience transient disconnections. Make sure that your application is configured to automatically reconnect to the AnalyticDB for MySQL cluster.

If you do not need to update the minor version of your AnalyticDB for MySQL cluster or an error occurs during the update process, you can cancel the scheduled minor version update. You can cancel only the scheduled events of a minor version update. For more information, see the "Cancel scheduled events" section of the Manage O&M events topic.

Warning

If the minor version of your AnalyticDB for MySQL cluster is earlier than the latest minor version, Alibaba Cloud pushes notifications at irregular intervals to inform you that the cluster needs to be updated to the latest minor version. We recommend that you update the minor version of your AnalyticDB for MySQL cluster at the earliest opportunity within six months after you receive the notification. Otherwise, you shall assume all liabilities for risks such as service interruptions and data loss.

December 2024

V3.2.4

Category

Feature

Description

References

New feature

Machine learning prediction by using SQL

SQL can be used to quickly deploy behavior sequence transformer (BST) models and run machine learning jobs that cover data preprocessing, training, and prediction.

Use SQL to implement machine learning prediction

November 2024

Category

Feature

Description

References

New feature

Lake cache

The lake cache feature is supported to cache frequently accessed Object Storage Service (OSS) objects on high-performance NVMe SSDs and improve the read efficiency of OSS data.

Lake cache

October 2024

Category

Feature

Description

References

New feature

Backup and restoration

Data backup sets can be deleted and the data backup feature can be disabled in the AnalyticDB for MySQL console.

Manage backups

Zero-ETL

The zero-ETL feature is supported to synchronize Lindorm data. You can create data synchronization tasks from Lindorm to AnalyticDB for MySQL to synchronize and manage data in an end-to-end manner and integrate transaction processing with data analysis.

Import data from Lindorm

September 2024

Data Lakehouse Edition

Category

Feature

Description

References

New feature

Cross-region cluster cloning

Clusters can be cloned across regions.

Clone a cluster

V3.2.2

Category

Feature

Description

References

New feature

Batch creation of MaxCompute external tables

Multiple MaxCompute external tables can be created at a time.

IMPORT FOREIGN SCHEMA

Support for aggregate functions in configuring incremental refresh for materialized views

The MAX(), MIN(), APPROX_DISTINCT(), COUNT(DISTINCT), and AVG() functions can be included in the QUERY BODY parameter when you configure incremental refresh for materialized views.

Configure incremental refresh for materialized views (preview)

Access to MaxCompute external tables in Arrow API mode

The Arrow API mode is supported to read and write MaxCompute external tables. Compared with the traditional Tunnel mode, the Arrow API mode improves data access and processing efficiency.

Use external tables to import data to Data Lakehouse Edition

Optimized feature

FROM_UNIXTIME function

The FROM_UNIXTIME function can be used to convert a UNIX timestamp to the DATETIME format.

Date and time functions

August 2024

Category

Feature

Description

References

New feature

Selection of the Spark engine for interactive resource groups

The Spark engine can be selected when you create an interactive resource group in an AnalyticDB for MySQL Data Lakehouse Edition cluster. You can run only Spark jobs in interactive resource groups by using the Spark engine. Spark jobs are run in an interactive manner.

Create a resource group

Limits on the number of zero-ETL tasks

The number of zero-ETL tasks from ApsaraDB RDS for MySQL or PolarDB for MySQL to AnalyticDB for MySQL is limited.

July 2024

Category

Feature

Description

References

New feature

Release of Basic Edition

AnalyticDB for MySQL Basic Edition is released. Basic Edition runs in single-replica mode and provides the same features as Enterprise Edition. Because Basic Edition uses a single-replica storage architecture, it does not support high availability. Basic Edition is suitable for business scenarios that require low-cost hot data storage but do not require high availability.

Editions

V3.2.1

Category

Feature

Description

References

New feature

Next-generation storage engine

The next-generation storage engine XUANWU_V2 is launched for AnalyticDB for MySQL. The engine caches cold data on Enterprise SSDs (ESSDs) to speed up data reads and provides next-generation column-oriented storage that supports higher I/O concurrency and reduces memory usage. The engine also allows you to enable the compaction service to perform local compaction operations in an independent process by using an independent resource pool. This reduces resource usage and improves service stability.

XUANWU_V2 engine

Incremental refresh for multi-table materialized views

Incremental refresh is supported for multi-table materialized views. The incremental data of multiple tables that are joined together can be automatically refreshed to the corresponding multi-table materialized view. This improves data query performance and data analysis efficiency.

Configure incremental refresh for materialized views (preview)

Invocation of user-defined functions (UDFs) by using the REMOTE_CALL() function

The REMOTE_CALL() function can be used to invoke custom functions that you create in Function Compute (FC). This way, you can use UDFs in AnalyticDB for MySQL.

UDFs

Forcible deletion of databases

The CASCADE keyword is supported in the DROP DATABASE statement to forcibly delete a database, including all tables in the database.

DROP DATABASE
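
For example, the following statement forcibly deletes a database together with all tables that it contains (the database name is illustrative):

  -- CASCADE drops the database even if it still contains tables.
  DROP DATABASE mydb CASCADE;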

Wide table engine

The wide table engine is supported for Data Lakehouse Edition. The wide table engine is compatible with the capabilities and syntax of the open source columnar database ClickHouse and can handle large amounts of columnar data.

Wide table engine

Path analysis functions

The SEQUENCE_MATCH() and SEQUENCE_COUNT() functions are supported to analyze user behavior and check whether the user behavior matches the specified pattern.

Path analysis functions

SSL encryption

SSL encryption is supported to encrypt data transmitted between a Data Warehouse Edition cluster and a client. This prevents data from being eavesdropped on, intercepted, or tampered with by third parties.

SSL encryption

Support for complex MaxCompute data types by MaxCompute external tables

Complex MaxCompute data types, such as ARRAY, MAP, and STRUCT, are supported for MaxCompute external tables of Data Lakehouse Edition clusters.

CREATE EXTERNAL TABLE

Subscription to AnalyticDB for MySQL binary logs by using Flink

Realtime Compute for Apache Flink can be used to consume AnalyticDB for MySQL binary logs in real time.

Use Realtime Compute for Apache Flink to subscribe to AnalyticDB for MySQL binary logs

Support for the ROARING BITMAP type by AnalyticDB for MySQL internal tables

The ROARING BITMAP type is supported.

Roaring bitmap functions

Optimized feature

Change of LIFECYCLE from a required keyword to an optional one

If you do not specify the LIFECYCLE keyword when you create a table, partition data is permanently retained.

CREATE TABLE

Table-level partition lifecycle management

For AnalyticDB for MySQL clusters of V3.2.1.1 or later, the partition lifecycle is managed at the table level instead of the shard level. The LIFECYCLE n parameter specifies that up to n partitions can be retained in each table.

CREATE TABLE
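
The following sketch illustrates both lifecycle behaviors described above. The DISTRIBUTED BY HASH and PARTITION BY VALUE clauses follow the commonly documented AnalyticDB for MySQL table syntax, and the table and column names are illustrative; see the CREATE TABLE topic for the authoritative syntax:

  -- LIFECYCLE is optional: omit it to retain partition data permanently.
  -- On V3.2.1.1 or later, LIFECYCLE 30 retains at most 30 partitions in this table.
  CREATE TABLE events (
    id BIGINT NOT NULL,
    event_time DATETIME NOT NULL,
    PRIMARY KEY (id, event_time)
  )
  DISTRIBUTED BY HASH(id)
  PARTITION BY VALUE(DATE_FORMAT(event_time, '%Y%m%d'))
  LIFECYCLE 30;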

Import of OSS data to AnalyticDB for MySQL by using external tables

The absolute path name and the asterisk (*) wildcard are supported for the url parameter when you use external tables to import OSS data to AnalyticDB for MySQL.

Use external tables to import data to Data Warehouse Edition

Automatic validity check of column names at table creation

Column names are automatically checked against naming conventions of AnalyticDB for MySQL when you execute the CREATE TABLE statement to create a table. If a column name does not meet the naming conventions, an error is returned. For information about the naming conventions of column names, see the "Naming limits" section of the Limits topic.

None

June 2024

Category

Feature

References

New feature

AnalyticDB for MySQL Enterprise Edition and Basic Edition are released.

  • Enterprise Edition runs in cluster mode and is an integrated edition of Data Lakehouse Edition and Data Warehouse Edition that provides the same features as Data Lakehouse Edition. Enterprise Edition supports capabilities in elastic mode, such as resource group isolation, elastic resource scaling, and tiered storage of hot and cold data. Enterprise Edition also supports capabilities in reserved mode, such as high-throughput real-time writes and high-concurrency real-time queries.

  • Basic Edition runs in standalone mode and supports tiered storage of hot and cold data. Basic Edition does not provide distributed capabilities, high availability, resource group isolation, or scheduled scaling. You cannot change a cluster from Basic Edition to Enterprise Edition.

Editions

April 2024

Category

Feature

Description

References

New feature

Query rewrite

The query rewrite feature of materialized views is supported. After you enable this feature, the optimizer determines whether a query can use pre-computed and stored data in materialized views. This way, the optimizer partially or entirely rewrites the original query to a query that can use materialized views.

Query rewrite of materialized views

Synchronization of Simple Log Service (SLS) data by using data synchronization

The data synchronization feature can be used to synchronize data in real time from an SLS Logstore to an AnalyticDB for MySQL cluster based on a specific offset. This helps meet your business requirements for real-time analysis of log data.

Zero-ETL

The zero-ETL feature is supported to help you synchronize and manage data, integrate transaction processing with data analysis, and focus on data analysis. You can create data synchronization tasks from ApsaraDB RDS for MySQL or PolarDB for MySQL to AnalyticDB for MySQL.

Use zero-ETL to synchronize data

Time zone selection at cluster creation

The time zone parameter can be selected for an AnalyticDB for MySQL cluster at cluster creation based on your business requirements. After you select a time zone, the system performs time-related data writes based on the selected time zone.

Create a cluster

Self-service minor version update

The minor version of a Data Warehouse Edition cluster can be viewed and updated in the AnalyticDB for MySQL console.

Update the minor version of a cluster

Vertical scaling of reserved storage resource specifications

Reserved storage resource specifications can be scaled up or down for Data Lakehouse Edition clusters.

Scale a Data Lakehouse Edition cluster

Use of a Spark distributed SQL engine in DataWorks

A Spark distributed SQL engine of AnalyticDB for MySQL Data Lakehouse Edition can be registered to DataWorks as a Cloudera's Distribution Including Apache Hadoop (CDH) cluster and then used as an execution engine. This way, you can develop and run Spark SQL jobs in DataWorks.

Use a Spark distributed SQL engine in DataWorks

Display of the progress bar in creation or configuration change of a cluster

A progress bar is displayed when you create or change the configurations of a Data Warehouse Edition cluster.

Create a cluster

March 2024

Data Lakehouse Edition

Category

Feature

Description

References

New feature

Spot instance

The spot instance feature can be enabled for job resource groups in Data Lakehouse Edition clusters. After you enable the spot instance feature for a job resource group, Spark jobs that run in the resource group attempt to use the spot instance resources. Compared with AnalyticDB compute unit (ACU) elastic resources, spot instance resources help you significantly reduce the costs of Spark jobs.

Spot instances

February 2024

Category

Feature

Description

References

New feature

Intelligent assistant

An intelligent assistant is provided in the AnalyticDB for MySQL console. The intelligent assistant answers your questions and helps you quickly resolve issues.

Note

The intelligent assistant supports only the Chinese language.

None

Spark distributed SQL engines

AnalyticDB for MySQL Data Lakehouse Edition Spark provides managed services for open source Spark distributed SQL engines to develop Spark SQL jobs. This helps you easily analyze, process, and query data to improve SQL efficiency.

Use a Spark distributed SQL engine to develop Spark SQL jobs

Access to OSS-HDFS

AnalyticDB for MySQL Data Lakehouse Edition Spark can be used to access OSS-HDFS.

Access OSS-HDFS

Storage overview

The data size of a cluster or a table can be viewed on the Storage Overview page of the AnalyticDB for MySQL console.

Storage analysis

V3.1.10

Category

Feature

Description

References

New feature

Primary and foreign key constraints

Primary and foreign key constraints can be used to eliminate unnecessary joins to improve database query performance.

Use primary and foreign key constraints to eliminate unnecessary joins

Monthly execution of resource scaling plans

Resource scaling plans can be configured to execute every month in Data Warehouse Edition.

Create a resource scaling plan

Multi-cluster scaling models

The multi-cluster feature can be enabled for resource groups in Data Lakehouse Edition. A multi-cluster scaling model allows AnalyticDB for MySQL to automatically scale resources based on query loads to meet resource isolation and high concurrency requirements for resource groups.

Multi-cluster scaling models

Variable-length binary functions

The AES_DECRYPT_MY() and AES_ENCRYPT_MY() functions are supported.

Variable-length binary functions

JSON functions

The JSON_REMOVE() function is supported.

JSON functions
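
A short, MySQL-compatible example of the function (the JSON document and path are illustrative):

  -- Remove the key "b" from the JSON document.
  SELECT JSON_REMOVE('{"a": 1, "b": 2}', '$.b');  -- {"a": 1}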

Plan cache

The plan cache feature is supported to allow you to cache execution plans of SQL statements. When you execute SQL statements that share the same SQL pattern, AnalyticDB for MySQL uses the cached execution plan of the SQL pattern to accelerate SQL compilation optimization and improve query performance.

Plan cache

Elastic import

The elastic data import method is supported for Data Lakehouse Edition. Elastic import consumes only a small amount of storage resources, or no computing and storage resources at all. This reduces the impact on real-time data reads and writes and improves resource isolation.

Data import methods

Asynchronous scheduling of extract, transform, load (ETL) tasks by using Data Management (DMS)

The task orchestration feature of DMS can be used to asynchronously schedule ETL tasks.

None

Modification of workload management rules

The WLM syntax can be used to modify workload management rules.

WLM

Optimized feature

Basic statistics

The collection policy for basic statistics is optimized.

None

Column group statistics

The collection policy for column group statistics is optimized.

None

Internal Error error message

The Internal Error message is optimized to help you quickly identify issues.

None

Asynchronous generation of splits

For external tables that have large amounts of data, AnalyticDB for MySQL can asynchronously generate splits to reduce the amount of time required to generate execution plans.

None

Split flow control

The split flow control feature for scanning OSS and MaxCompute external tables is optimized.

None

Parameter check policy for making RC HTTP calls

The parameter check policy for making RC HTTP calls is optimized to prevent SQL injections.

None

Memory usage of storage nodes

The memory usage of storage nodes is optimized to reduce garbage collection (GC) frequency and improve system stability.

None

Fixed issue

Materialized views

The following issue is fixed: An error is returned for the ARRAY_AGG() function when you use the CREATE VIEW statement to create a view.

None

On-premises data import by using the LOAD DATA statement

The following issue is fixed: When you use the LOAD DATA statement to import on-premises data to Data Warehouse Edition, CSV files are incompatible or data is disordered.

None

Storage of cold data

The cold data storage issue is fixed to improve the query hit ratio and query performance.

None

November 2023

Data Warehouse Edition

Category

Feature

Description

References

New feature

Diagnostics

The diagnostics feature is supported. This feature allows you to diagnose the running status of clusters within a specific period of time. AnalyticDB for MySQL performs joint analysis based on monitoring data, log data, and the status of databases and tables. AnalyticDB for MySQL evaluates the health status of clusters from multiple aspects, such as resource usage, workload changes, SQL queries, operators, and storage, to help you efficiently identify and resolve issues.

Diagnostics

Change of virtual private clouds (VPCs) and vSwitches

VPCs and vSwitches can be changed.

Change the VPC and vSwitch of a cluster

Data Lakehouse Edition

Category

Feature

Description

References

New feature

Custom Spark images

Custom Spark images are supported. If the default image of AnalyticDB for MySQL Spark cannot meet your business requirements, you can add the software packages and dependencies required for Spark jobs to the default image to create a custom image. When you develop Spark jobs, you can specify the custom image as the execution environment.

Custom Spark images

Development of interactive Jupyter jobs

A Docker image can be used to start the interactive JupyterLab development environment. This environment helps you connect to AnalyticDB for MySQL Spark and perform interactive testing and computing based on elastic resources.

Develop an interactive Jupyter job

October 2023

V3.1.9

Category

Feature

Description

References

New feature

Common table expression (CTE) execution optimization

If a CTE subquery is referenced repeatedly, the subquery can be executed only once to improve query performance. By default, this feature is disabled. You can enable this feature by using the cte_execution_mode parameter.

WITH
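
The following sketch shows a query that benefits from this optimization: the CTE is referenced twice, so executing it only once avoids recomputation. The table and column names are illustrative, and the exact value of the cte_execution_mode parameter is documented in the WITH topic:

  -- Enable the feature by setting cte_execution_mode (see the WITH topic for the value):
  -- SET ADB_CONFIG cte_execution_mode = <value>;
  WITH daily AS (
    SELECT order_date, SUM(amount) AS total
    FROM orders
    GROUP BY order_date
  )
  SELECT d1.order_date, d1.total, d2.total AS next_day_total
  FROM daily d1
  JOIN daily d2 ON d2.order_date = DATE_ADD(d1.order_date, INTERVAL 1 DAY);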

Access to Hudi data by using XIHE SQL

XIHE SQL can be used to access Hudi data from Data Lakehouse Edition.

OSS external tables

MV_PROPERTIES configuration

An elastic resource group can be specified to create and refresh materialized views to improve query efficiency.

Elastic materialized views

Column group statistics

The statistics on multiple columns of a table can be collected to describe how these columns correlate with each other.

Statistics

Manual collection of partition statistics

The ANALYZE TABLE statement can be used for Data Lakehouse Edition to collect statistics on partitions of OSS external tables.

Collect statistics on partitions

Variable-length binary functions

The ZIP(), UNZIP(), GZIP(), and GUNZIP() functions are supported.

Variable-length binary functions

Incremental refresh for materialized views

Incremental refresh can be configured when you create a materialized view.

Configure incremental refresh for materialized views (preview)

Forcible overwriting of existing properties of workload management rules

After you use the WLM syntax to create a workload management rule, existing properties of the rule can be forcibly overwritten.

WLM

AI_GENERATE_TEXT() function

The AI_GENERATE_TEXT() function can be used to analyze unstructured data and generate structured data in Data Warehouse Edition.

None

Multi-statement

Multiple SQL statements that are separated by semicolons (;) can be consecutively executed. By default, the multi-statement feature is disabled. You can execute the SET ADB_CONFIG ALLOW_MULTI_QUERIES=true; statement to enable the feature.

None
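
The SET statement mentioned above enables the feature. After that, semicolon-separated statements can be submitted in a single request (the table and values are illustrative):

  SET ADB_CONFIG ALLOW_MULTI_QUERIES=true;
  -- Multiple statements submitted together:
  INSERT INTO t VALUES (1); INSERT INTO t VALUES (2); SELECT COUNT(*) FROM t;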

Statistics on Hive external tables

The number of rows in ORC external tables can be collected in real time to optimize complex queries of ORC external tables.

None

Optimized feature

JOIN optimization

Filter scenarios and data transfer efficiency are optimized when a hash join is used to join tables. A small table in a subquery can be efficiently filtered so that only the data that meets the filter conditions is transferred to the main query.

None

Optimization of vectorized reading of Parquet files

The query efficiency of Parquet files is improved.

None

Optimization of Aggregation operators

The execution efficiency of Aggregation operators is optimized in GROUP BY scenarios that use a STRING-type column or multiple columns.

None

Optimization of dictionary encoding

Dictionary encoding is used to improve the performance of GROUP BY operations.

None

Analyzer optimization

The method of specifying custom dictionaries for the IK analyzer is optimized.

None

Optimization of executor nodes

The startup speed of executor nodes in job resource groups is increased.

None

Optimization of INSERT OVERWRITE

An external table can be used in multiple INSERT OVERWRITE operations at the same time.

None

Optimization of asynchronous jobs

The maximum length of the result set of asynchronous queries is increased.

None

Fixed issue

Precision of the DECIMAL type

The following issue is fixed: Row-oriented engines do not support the precision change of the DECIMAL type.

None

Statistics on Hive external tables

The following issue is fixed: An excessive amount of time is required to collect information about Hive external tables.

None

WITH

The following issue is fixed: The table alias that is enclosed in grave accents (``) cannot be identified in the WITH clause.

None

File names of external tables

The following issue is fixed: An error message is returned if the file name of an external table contains a colon (:).

None

September 2023

Data Lakehouse Edition

Category

Feature

Description

References

New feature

Spark application performance diagnostics

The Spark application performance diagnostics feature helps you quickly locate and analyze performance bottlenecks to resolve issues.

Spark application performance diagnostics

Public network configuration for Spark application access

A public network can be configured for Spark applications to access self-managed databases or third-party cloud services.

Public network configuration for Spark application access

Access to MySQL data by using Spark SQL

Spark SQL can be used to access self-managed MySQL databases or Alibaba Cloud MySQL databases.

Read MySQL data

Access to Lindorm data by using Spark SQL

Spark SQL can be used to access Hive tables and wide tables of Lindorm.

Read Lindorm data

June 2023

Category

Feature

Description

References

New feature

Resource overview and job usage statistics

The following information about Data Lakehouse Edition cluster resources is displayed in the AnalyticDB for MySQL console:

  • Cluster information: the amounts of computing and storage resources, including the reserved and elastic resources.

  • Resource group information: the total amount of computing resources, the amount of reserved computing resources, and the maximum amount of computing resources.

  • Job information: the total amount of computing resources, the amount of reserved computing resources, and the amount of elastic computing resources.

View monitoring information about resource groups

Optimized feature

Change of the default data backup cycle

The default data backup cycle of Data Warehouse Edition clusters is changed from at least twice a week to at least once a week.

Manage backups

May 2023

V3.1.7 to V3.1.8

Category

Feature

Description

References

New feature

Improved monitoring and alerting

The instance health status and cluster health status metrics are supported.

View monitoring information of AnalyticDB for MySQL

Priority queues and concurrency control of interactive resource groups

The priority queue feature is supported for queries in interactive resource groups. You can configure query priorities to allow queries to enter one of the following priority queues: LOWEST, LOW, NORMAL, and HIGH. You can also configure the number of concurrent queries for queues.

Priority queue and concurrency of interactive resource groups

Priority queues of job resource groups

The priority queue feature is supported for jobs in job resource groups. You can configure job priorities to allow jobs to enter one of the following priority queues: LOWEST, LOW, NORMAL, and HIGH. Jobs that have higher priorities are preferentially run.

Priority queues of job resource groups

Precision change of the DECIMAL type

The precision of the DECIMAL type can be changed from low to high.

ALTER TABLE

Change of the data type

You can change an integer type, such as TINYINT, SMALLINT, INT, BIGINT, SHORT, or LONG, to a floating-point type, such as FLOAT or DOUBLE, or to the DECIMAL type.

ALTER TABLE PARTITION

The partition function of a table can be changed.
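
Hedged sketches of the column type changes described above. The table and column names are illustrative, and the MODIFY COLUMN form is the commonly documented ALTER TABLE syntax; see the ALTER TABLE topic for the authoritative syntax:

  -- Increase the precision of a DECIMAL column.
  ALTER TABLE orders MODIFY COLUMN price DECIMAL(20, 4);

  -- Change an integer column to a floating-point type.
  ALTER TABLE orders MODIFY COLUMN quantity DOUBLE;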

Optimized feature

  • Optimizer optimization:

    • Eager aggregation and automatic two-phase aggregation rules are supported.

    • By default, the Cascades optimizer is enabled.

    • The Swap Outer Join rule is supported.

  • Executor restarts or upgrades upon task failures do not affect running tasks.

  • The write performance of the INSERT OVERWRITE statement is improved.

  • The restart of storage nodes is accelerated.

  • JSON optimization:

    • The issue about the IS NOT NULL or IS NULL operator of the JSON_EXTRACT() function is fixed.

    • The performance degradation of the C-Store storage engine caused by pushdown failures of the JSON_ARRAY() function is fixed.

  • The performance of data scan operators is improved.

None

April 2023

Data Lakehouse Edition

Category

Feature

References

New feature

ACU-hour plans can be purchased to offset the usage of reserved computing resources, reserved storage resources, and elastic resources of pay-as-you-go clusters, as well as elastic resources of subscription clusters.

ACU-hour plans

February 2023

V3.1.6.4

Category

Feature

Description

References

New feature

Roaring bitmap functions

Roaring bitmaps are efficiently compressed bitmaps that are widely used in various programming languages and big data platforms for deduplication, tag-based filtering, and computing of time series data.

Roaring bitmap functions

Funnel analysis functions

Funnel analysis is a common type of conversion analysis. It is used to reflect the conversion rates of user behavior in various stages of a process. The following functions are supported: WINDOW_FUNNEL(), RETENTION(), RANGE_RETENTION_COUNT(), and RANGE_RETENTION_SUM().

Funnel and retention functions

UPDATE JOIN

The UPDATE statement can be used together with JOIN to update the data of multiple tables.

Update multiple tables
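
A minimal, MySQL-style sketch of an UPDATE statement that joins two tables (the table and column names are illustrative):

  -- Update rows in target by using matching rows from source.
  UPDATE target t
  JOIN source s ON t.id = s.id
  SET t.status = s.status;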

ApsaraDB RDS for MySQL, ApsaraDB for MongoDB, MaxCompute, OSS, and Tablestore external tables

  • External tables can be used to import data from ApsaraDB RDS for MySQL or ApsaraDB for MongoDB to AnalyticDB for MySQL Data Lakehouse Edition.

  • External tables can be used to import data from MaxCompute to AnalyticDB for MySQL Data Lakehouse Edition.

  • For Data Lakehouse Edition clusters, partitioned OSS external tables provide partition mapping and support a variety of formats, such as CSV, JSON, Parquet, ORC, and Avro. These external tables can be used to import OSS data that contains more than 100,000 partitions.

  • The Tablestore connector is supported to import data from Tablestore.

Character sets supported for MySQL external tables

MySQL character sets can be specified by using the charset parameter when you create an ApsaraDB RDS for MySQL or self-managed MySQL external table.

CREATE EXTERNAL TABLE

Cost-based optimizer (CBO) update

The automatic statistics collection feature is supported in Data Warehouse Edition. Statistics about data columns can be used to help the query optimizer generate high-quality execution plans.

Statistics

Intelligent workload management

The workload management feature separates and throttles queries by assigning them to queues of different priorities. You can customize rules to intercept bad queries and assign queries to queues.

Optimized feature

  • New import models are supported to improve import performance for newly created tables.

  • The hash and sort performance of Window operators is optimized to support adaptive aggregation.

  • Jetty is replaced with Netty to reduce network connections and CPU consumption.

    By default, Netty is enabled for control links. You can enable Netty for data links.

  • The types and value ranges of partition fields can be specified in DDL statements. This optimizes the partition pruning performance of OSS external tables.

  • The memory model used for importing data based on OSS and MaxCompute external tables is optimized. The performance of reading external tables is also optimized.

  • CacheFS is optimized to reduce caching for hot and cold data and improve stability.

None

V3.1.5.8

Category

Feature

Description

References

New feature

Full-text search

The following analyzers are built into AnalyticDB for MySQL to implement full-text search: Standard, Ngram, Edge_ngram, and Pattern.

Analyzers for full-text indexes

V3.1.5.10

Category

Feature

Description

References

New feature

Regular expression functions

The following regular expression functions are supported: REGEXP_MATCHES(), REGEXP_SUBSTR(), REGEXP_INSTR(), and REGEXP_REPLACE().

Regular functions
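
Short, MySQL-compatible examples of two of the functions listed above (the strings and patterns are illustrative):

  -- Extract the first run of digits.
  SELECT REGEXP_SUBSTR('order-12345', '[0-9]+');        -- 12345
  -- Replace all digits with an empty string.
  SELECT REGEXP_REPLACE('order-12345', '[0-9]+', '');   -- order-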

January 2023

Data Warehouse Edition

Category

Feature

References

New feature

The SQL diagnostics feature is supported. This feature allows you to view stage and task details to improve analysis efficiency of slow queries.

Use stage and task details to analyze queries

The performance level of ESSDs can be changed.

Data Lakehouse Edition

Category

Feature

References

New feature

AnalyticDB for MySQL Data Lakehouse Edition is available. Besides the real-time analysis capability of Data Warehouse Edition, Data Lakehouse Edition provides the batch processing capability.

November 2022

Data Warehouse Edition

Category

Feature

References

New feature

AnalyticDB for MySQL Data Warehouse Edition is available in the Philippines (Manila) and Thailand (Bangkok) regions.

Pricing for Data Warehouse Edition

August 2022

V3.1.5.0

Category

Feature

Description

References

New feature

Enhancement of the DECIMAL type

Decimal numbers can be converted to a higher precision, and variable-length decimal numbers are supported. This feature improves the I/O efficiency of DECIMAL data.

None

Table-level throttling

The write rate of DML statements can be limited for specific tables to ensure overall performance. By default, table-level throttling is disabled.

None

Memory management on wide tables

The memory management on wide tables is optimized to reduce the consumption of memory resources.

None

JSON_UNQUOTE() function

The JSON_UNQUOTE() function can be used to unquote the value specified by json_value, escape specific characters in json_value, and then return the processing result.

JSON_UNQUOTE

JSON_CONTAINS() function

The JSON_CONTAINS() function can be used to determine whether a given candidate is contained within a JSON document or whether the candidate exists in a specified path within the JSON document.

JSON_CONTAINS

JSON_CONTAINS_PATH() function

The JSON_CONTAINS_PATH() function can be used to determine whether a specified path exists in the given JSON document.

JSON_CONTAINS_PATH
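
Hedged, MySQL-compatible examples of the three JSON functions described above (the JSON documents and paths are illustrative):

  SELECT JSON_UNQUOTE('"hello"');                               -- hello
  SELECT JSON_CONTAINS('{"a": 1, "b": 2}', '1', '$.a');         -- 1
  SELECT JSON_CONTAINS_PATH('{"a": 1}', 'one', '$.a', '$.c');   -- 1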

Optimized feature

  • TopN operator: In scenarios where the ROW_NUMBER() function is used to sort data and the WHERE clause is used to implement semantics of TopN, a variety of measures such as sorting, condition pushdown, and pre-aggregation can be taken to reduce the amount of data that is involved in computing and transmission. The performance of TopN operators can be improved tenfold while memory consumption is reduced by 90%.

  • Window operator: General-use window functions are optimized to improve performance by three to seven times. Different algorithms are used based on different data features such as data aggregation degree and sequence ordering.

  • Partial Agg operator: Self-adaption capabilities of two-phase aggregation are improved. Data features of data aggregation are dynamically collected during execution to determine whether pre-aggregation is required. The performance of aggregation operators can be improved by two to four times if no aggregation degree exists.

  • System logs: When the disk usage exceeds the threshold, logs can be automatically cleared.

None

March 2022

Category

Feature

Description

References

New feature

Schema optimization

The schema optimization feature is supported to provide optimization suggestions for hot and cold data, indexes, and distribution keys based on intelligent statistical analysis. This feature can help reduce costs and improve efficiency when you use AnalyticDB for MySQL clusters.

Schema optimization

December 2021

V3.1.4.13 to V3.1.4.16

Category

Feature

References

New feature

Two data replicas and one log replica are configured based on the Raft algorithm to ensure data reliability and reduce storage overheads.

None

High availability is supported for the nameservice of Apsara File Storage for HDFS when data is exported to HDFS.

Export data to Apsara File Storage for HDFS

Optimized feature

  • The lock granularity for CREATE VIEW is optimized to improve concurrency performance.

  • The upload speed of full backup files is increased during specification changes.

  • The index creation performance for complex data types is improved.

  • The memory management of background tasks is optimized.

None

September 2021

V3.1.4.12

Category

Feature

References

Optimized feature

The performance of the hash join algorithm to create a hash table is improved.

None

August 2021

V3.1.4.11

Category

Feature

Description

References

New feature

API operations related to cluster running reports

API operations can be called to query metrics in a cluster running report.

Optimized feature

  • The download speed of full data is increased.

  • Inappropriate index condition pushdown (ICP) is blocked. By default, 128 indexes are not pushed down. This configuration can be changed.

None

V3.1.4.10

Category

Feature

Description

References

New feature

O&M event management

The database upgrade time can be viewed and adjusted in the AnalyticDB for MySQL console.

Manage O&M events

Optimized feature

  • The TIME() function can be pushed down.

  • The table scan performance is improved.

  • Mutual exclusion and fair scheduling are supported for data import.

None

July 2021

V3.1.4.9

Category

Feature

Description

References

New feature

Data import to and export from Apsara File Storage for HDFS by using external tables

External tables can be used to import Apsara File Storage for HDFS data to AnalyticDB for MySQL and export AnalyticDB for MySQL data to Apsara File Storage for HDFS.

SQL diagnostics

The details of SQL queries can be viewed and filtered based on categories such as the top 100 most time-consuming queries and queries that failed to be executed. Also, SQL queries can be optimized based on diagnostic results and optimization suggestions.

Overview

End-to-end data management

An end-to-end data management portal is added to the AnalyticDB for MySQL console. Data assets can be managed and jobs can be developed and scheduled by using DMS.

Manage data assets

Schedule XIHE SQL jobs by using DMS

Custom analyzers and dictionaries in full-text search scenarios

Custom analyzers and dictionaries can be configured in full-text search scenarios.

Analyzers for full-text indexes

Optimized feature

  • Connections to frontend nodes are optimized from single-threaded connections to multi-threaded connections. This improves write performance in a linear manner.

  • Performance is improved when the TRUNCATE TABLE statement is frequently executed.

  • By default, the REPLACE INTO statement is atomic to prevent sudden changes to data.

None

March 2021

V3.1.1.9 to V3.1.3.9

Category

Feature

Description

References

New feature

Computing resource grouping

Computing resources of AnalyticDB for MySQL clusters in elastic mode can be divided into resource groups for isolation.

Create a resource group

Tiered storage of hot and cold data

Table data of AnalyticDB for MySQL clusters in elastic mode can be defined as hot or cold data. You can switch between hot and cold storage.

Tiered storage of hot and cold data

Cluster mode change

AnalyticDB for MySQL clusters can be changed from reserved mode to elastic mode.

None

Compatibility with time formats in AnalyticDB for MySQL

Time formats in AnalyticDB for MySQL V2.0 are supported. Example: 2020-08-03T23:59:59.

None

Index creation or deletion for JSON fields by executing the ALTER TABLE statement

Indexes for JSON fields can be disabled by executing the ALTER TABLE statement.

ALTER TABLE

BINARY type

The BINARY type is supported for the metadata of the protocol layer.

None

Export of file headers during export from AnalyticDB for MySQL to a single OSS object

File headers can be exported when you export data from AnalyticDB for MySQL to a single OSS object by using an external table.

Export data to OSS

Maximum number of rows that can be generated in an object when you export data from AnalyticDB for MySQL to OSS by using an external table

If the number of exported rows exceeds the maximum number, extra rows are exported to one or more new objects. You can specify both the maximum size and maximum number of rows in an object. Written data that first triggers the limit is exported to a new object.

None

SQL plan module

Execution plans of slow SQL queries can be viewed in the AnalyticDB for MySQL console.

Query process and execution plan

INSERT INTO SELECT ON DUPLICATE KEY UPDATE

This statement is supported when the input values in the UPDATE clause are constants or values from the SELECT column list.

None
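
A minimal sketch of the constant case described above (the table and column names are illustrative):

  -- Insert rows from src into dst; on a duplicate key, reset cnt to a constant.
  INSERT INTO dst (id, cnt)
  SELECT id, cnt FROM src
  ON DUPLICATE KEY UPDATE cnt = 0;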

File format of OSS external tables

The ORC format is supported for OSS external tables.

None

Priority of the BATCH LOAD statement

Hints can be used to specify the priority of the BATCH LOAD statement.

None

Optimized feature

Performance of the LIMIT n clause

Performance is improved when you use the pushdown logic of the LIMIT n clause to filter data.

None

Compatibility

The table creation statement is compatible with the BOOLEAN type.

None

Database naming conventions

Database names can start with an uppercase letter or underscore (_).

None

July 2020

V3.1.1.6

Category

Feature

Description

References

New feature

TIMESTAMP and DATETIME columns

When the MODIFY COLUMN statement is executed to modify a TIMESTAMP or DATETIME column, the ON UPDATE CURRENT_TIMESTAMP clause is supported.

None

Table and column naming conventions

Table and column names support Chinese characters.

None

Requirements for creating an OSS external table

The following requirements must be met when you create an OSS external table:

  • The partition key columns must be specified at the end of the field list. Otherwise, the table cannot be created.

  • The first row in an OSS object can be set as a file header. The system skips the first row when the system reads data from the object.

Use external tables to import data to Data Warehouse Edition

CREATE TABLE AS SELECT

The CREATE TABLE AS SELECT statement can be executed to create a table.

CREATE TABLE AS SELECT (CTAS)
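
A minimal sketch of the CTAS form (the table names and filter are illustrative; additional table options may be required depending on the cluster version):

  -- Create a new table from the result of a query.
  CREATE TABLE orders_2020 AS
  SELECT * FROM orders WHERE order_year = 2020;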

Optimized feature

Fields of the BOOLEAN type

The default values for the fields of the BOOLEAN type can be 0 or 1.

None

SHOW DATABASES

The permissions to list databases can be granted when the SHOW DATABASES statement is executed.

None

April 2020

V3.0.9.6

The following database software upgrades are performed for users of AnalyticDB for MySQL Basic Edition to improve service quality.

Category

Feature

Description

References

New feature

Geometry functions

Geometry functions are supported.

Operation functions

JSON_EXTRACT() function

The JSON_EXTRACT() function is supported.

JSON_EXTRACT

INSERT INTO VALUES(FROM_UNIXTIME(...))

The INSERT INTO VALUES(FROM_UNIXTIME(...)) statement is supported.

None

Nested-loop join (NLJ)

NLJ is supported for data join.

None

Power BI connection

Power BI can be connected to the protocol layer.

None

Database naming conventions

Hyphens (-) can be included in database names.

Note

Database names that contain hyphens (-) must be enclosed in grave accents (`).

None

Optimized feature

Zero dates

Zero dates (0000-00-00) are converted to NULL.

None

DIV() function of the DECIMAL type

The DIV() function is supported for the DECIMAL type, as in MySQL.

DIV
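
For example, integer division with DIV behaves as in MySQL (the operands are illustrative):

  SELECT 10.5 DIV 3;  -- 3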

CAST() function of the JSON type

The CAST() function is supported for JSON data as in MySQL and Apache Hive.

JSON

Slow query logging threshold

The slow query logging threshold is set to 1 second.

None

March 2020

V3.0.9

Category

Feature

Description

References

New feature

JSON data types and related JSON functions

Complex JSON data types and related JSON functions are supported.

SELECT NOW()

The SELECT NOW() statement is supported.

None

Optimized feature

Maximum number of tables

The maximum number of tables that can be created in a Cluster Edition cluster of the minimum specifications is increased from 512 to 800. The minimum specifications indicate that the Cluster Edition cluster has only two node groups.

None

Compatibility with DDL statements

Compatibility with DDL statements in AnalyticDB for MySQL V2.0 is improved to enable smooth data migration to AnalyticDB for MySQL V3.0 clusters. Your business is not affected during data migration.

None

Compatibility with business intelligence (BI) tools

AnalyticDB for MySQL V3.0 improves compatibility with BI tools and is fully compatible with Power BI.

Power BI

February 2020

V3.0.8

Category

Feature

Description

References

New feature

MariaDB JDBC Connector

MariaDB Java Database Connectivity (JDBC) Connector is supported.

None

Specifications applicable to Cluster Edition

The storage-intensive specification S8 is added for AnalyticDB for MySQL Cluster Edition clusters. S8 is ideal for scenarios that do not require high concurrency and performance.

None

Flexible purchase of clusters

Node groups can be purchased and scaled out in pairs. This allows you to purchase clusters on demand and reduces costs.

None

Availability in Alibaba Finance Cloud

AnalyticDB for MySQL is available in the China East 1 Finance, China East 2 Finance, and China South 1 Finance regions of Alibaba Finance Cloud.

None

Availability on the international site (alibabacloud.com)

AnalyticDB for MySQL is available in the China (Hong Kong), Indonesia (Jakarta), and Malaysia (Kuala Lumpur) regions.

None

Optimized feature

Time types

The TIMESTAMP and DATETIME data types are compatible with the NO_ZERO_DATE mode of MySQL SQL_MODE.

None

December 2019

V3.0.7

Category

Feature

Description

References

New feature

Specification C24

The compute-intensive specification C24 is added for AnalyticDB for MySQL clusters. C24 is ideal for scenarios that require sophisticated computing capabilities.

None

Configuration upgrade

Cluster specifications can be upgraded. You can switch between the C4, C8, and C24 specifications within seconds.

None

Monitoring and alerting

The monitoring and alerting feature is supported. You can use CloudMonitor to set thresholds for all metrics. An alert is triggered when a threshold is reached.

Configure an alert rule

Query termination

The query termination feature is supported. You can view and terminate running queries in real time in the AnalyticDB for MySQL console.

None

Data synchronization from PolarDB-X to AnalyticDB for MySQL

Data Transmission Service (DTS) can be used to synchronize data from PolarDB-X to AnalyticDB for MySQL in real time for data analytics.

None

Availability on the international site (alibabacloud.com)

AnalyticDB for MySQL is released for international use. This service is available in the Singapore and Japan (Tokyo) regions.

None

Optimized feature

View creation

Window functions can be used to create views.

None

Use scenarios of CTEs

CTEs can be used in the INSERT SELECT FROM clause.

INSERT SELECT FROM

September 2019

V3.0.6

Category

Feature

Description

References

New feature

Specification C4

The specification C4 is added to simplify the use of AnalyticDB for MySQL. We recommend that you use this specification for learning purposes.

None

COLLECT_SET() function

The COLLECT_SET() function is supported.

None

Optimized feature

Creation and scaling time for clusters

The amount of time spent on creating and scaling clusters is shortened to reduce costs.

None

August 2019

V3.0.5

Category

Feature

Description

References

New feature

Default column value

The default value of a column can be set to the current time. Example: gmt_create datetime NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP.

None

Oracle GoldenGate (OGG)

OGG is supported in AnalyticDB for MySQL to enhance data synchronization from Oracle to AnalyticDB for MySQL.

None

Disk resizing

Flexible disk resizing is supported. This allows you to resize disks on demand and reduces costs.

None

Availability in Alibaba Finance Cloud

AnalyticDB for MySQL is available in Alibaba Finance Cloud.

None

Virtual e-commerce logistics platforms and CloudTmall

AnalyticDB for MySQL is available in virtual e-commerce logistics platforms and CloudTmall.

None

Optimized feature

Error message returned for modifying non-auto-increment keys

The error message that is returned when you change non-auto-increment keys to auto-increment keys is optimized to make the cause of the error easier to understand.

None

July 2019

V3.0.4

Category

Feature

Description

References

New feature

Backup and restoration

The backup and restoration feature is supported. You can restore data from backup sets to a point in time to maximize data restorability.

None

LOAD DATA

The LOAD DATA LOCAL INFILE statement is supported.

LOAD DATA LOCAL INFILE
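
A hedged, MySQL-style example of the statement (the file path, table name, and delimiters are illustrative):

  -- Import a local CSV file into table t.
  LOAD DATA LOCAL INFILE '/path/to/data.csv'
  INTO TABLE t
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n';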

Flexible purchase of services

Node groups can be purchased in pairs. For example, you can set Node Groups to 2, 4, 6, or 8 on the AnalyticDB for MySQL buy page.

None

Data types and important functions

New data types and specific important functions are supported.

None

Optimized feature

Compatibility

AnalyticDB for MySQL is fully compatible with Navicat, FineReport, and FineBI, and its compatibility with Sequel Pro is greatly improved.

None