
Realtime Compute for Apache Flink: June 21, 2023

Last Updated: Jun 05, 2024

This topic describes the major updates and bug fixes in the version of Realtime Compute for Apache Flink that was released on June 21, 2023, and provides links to relevant references.

Important

A canary release is initiated for this version and will be complete within two weeks. If you cannot find the new features in the Realtime Compute for Apache Flink console, the new version has not been rolled out to your platform yet. If you want your platform to be upgraded at the earliest opportunity, submit a ticket to apply for an upgrade. For more information about the upgrade plan, see the most recent announcement on the right side of the homepage of the Realtime Compute for Apache Flink console.

Overview

A new official version of Realtime Compute for Apache Flink was released on June 21, 2023. This version includes platform updates, engine updates, connector updates, performance optimization, and bug fixes.

The engine version Ververica Runtime (VVR) 6.0.7 is released. VVR 6.0.7 is an enterprise-level Flink engine that is based on Apache Flink 1.15.4. This version of Realtime Compute for Apache Flink officially supports MaxCompute catalogs and Log Service catalogs. You can define and use the data tables of these catalogs as permanent tables.

Apache Paimon, which is in invitational preview, is updated to Apache Paimon 0.4.0. The update supports data ingestion into data lakes by using the Flink Change Data Capture (CDC) connector and schema evolution by executing the CREATE TABLE AS and CREATE DATABASE AS statements. In addition, the order of streaming read and write operations is better maintained, data consumption becomes more flexible, and read and write performance is improved. Data control features, such as snapshot cleanup and automatic partition deletion, are also enhanced, and the Parquet file format is supported.

VVR 4.0.18 is also officially released. This version fixes multiple defects and is the final recommended version of the VVR 4.X series.

Multiple common features on the platform are optimized, such as audit logs, access to Hive clusters that support Kerberos authentication, and monitoring and alerting. The display of specific pages and user experience on these pages are optimized.

The canary release will be complete across all regions within two weeks. After the canary release is complete, the platform capabilities are upgraded, and the new engine version appears in the Engine Version drop-down list of your draft. You can then upgrade the engine that is used by your draft to the new version. We look forward to your feedback.

Features

Feature: Execution of the CREATE TABLE AS statement or the CREATE DATABASE AS statement for real-time data ingestion into Apache Paimon by using Apache Paimon catalogs
Description: Apache Paimon catalogs can be used to ingest data into Apache Paimon in real time.
References: CREATE TABLE AS statement
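
For example, after an Apache Paimon catalog is created, a single CREATE TABLE AS statement can create the Paimon table and keep it synchronized with a MySQL CDC source, including schema changes. The following minimal sketch uses placeholder catalog, database, and table names and assumes that the MySQL CDC source table orders_src has already been declared:

    -- Minimal sketch: synchronize a MySQL CDC source table into Apache Paimon.
    -- paimon_catalog, order_db, orders, and orders_src are placeholder names.
    CREATE TABLE IF NOT EXISTS `paimon_catalog`.`order_db`.`orders`
    AS TABLE `default_catalog`.`default_database`.`orders_src`;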

Feature: Enhancement of the MySQL connector for result tables and dimension tables
Description: The capabilities of the MySQL connector that is used for result tables and dimension tables are enhanced. From this version, we recommend that you gradually migrate your deployments from the ApsaraDB RDS for MySQL connector to the MySQL connector.
References: MySQL connector
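
The following sketch shows a result table that is declared with the MySQL connector. The schema, the connection parameters, and the object names are placeholders, and the WITH options should be verified against the MySQL connector documentation for your engine version:

    -- Hypothetical result table that writes to the MySQL table orders_sink.
    CREATE TEMPORARY TABLE mysql_sink (
      order_id BIGINT,
      amount DECIMAL(10, 2),
      PRIMARY KEY (order_id) NOT ENFORCED
    ) WITH (
      'connector' = 'mysql',
      'hostname' = '<yourHostname>',
      'port' = '3306',
      'username' = '<yourUsername>',
      'password' = '<yourPassword>',
      'database-name' = '<yourDatabase>',
      'table-name' = 'orders_sink'
    );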

Feature: Incremental reading of MySQL CDC source tables that do not have a primary key
Description: A MySQL CDC source table that does not have a primary key can be used for incremental reading. This update allows more types of MySQL tables to be used as CDC source tables.
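
A source table for such a MySQL table might look like the following sketch. The names are placeholders, and the scan.incremental.snapshot.chunk.key-column option, which designates the column that is used to split snapshot chunks when no primary key exists, is an assumption that you should verify against the MySQL connector documentation:

    -- Hypothetical MySQL CDC source table that has no primary key.
    CREATE TEMPORARY TABLE orders_nopk_src (
      order_id BIGINT,
      amount DECIMAL(10, 2)
      -- No PRIMARY KEY clause: the MySQL table does not have a primary key.
    ) WITH (
      'connector' = 'mysql',
      'hostname' = '<yourHostname>',
      'port' = '3306',
      'username' = '<yourUsername>',
      'password' = '<yourPassword>',
      'database-name' = '<yourDatabase>',
      'table-name' = 'orders_nopk',
      -- Column that is used to split incremental snapshot chunks.
      'scan.incremental.snapshot.chunk.key-column' = 'order_id'
    );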

Feature: Expiration time and increment settings supported by the Tair connector for result tables
Description: The Tair connector that is used for result tables allows you to specify the expiration time of data in a Tair result table and configure increment settings.
References: Tair connector

Feature: Access to a Hive cluster that supports Kerberos authentication
Description: Data of a Flink JAR deployment or Flink Python deployment can be written to a Hive cluster that supports Kerberos authentication.

Feature: Audit logs
Description: Realtime Compute for Apache Flink is connected to the ActionTrail service. This way, you can view the operation records of users in Realtime Compute for Apache Flink in the ActionTrail console.
References: View resource operation events by using ActionTrail

Feature: Optimized display of intelligent deployment diagnostics information
Description: The Diagnosis tab is added to the Deployments page to display more diagnostic information. The dialog box for intelligent deployment diagnostics is no longer used. Health scores and deployment diagnostics help you learn the deployment status more easily.
References: Perform intelligent deployment diagnostics

Feature: Addition of the Modifier column
Description: The Modifier column is added to the Deployments page to help you identify who modified a deployment.
References: None

Feature: Classification of engine versions
Description: All engine versions in the Engine Version drop-down list of the Configurations pane are classified into four types: Recommend, Stable, Normal, and Deprecated. We recommend that you upgrade your engine to a stable version or a recommended version.

Feature: Member management API
Description: The member management API is available for you to implement automated authorization.
References: None

Feature: Page-related experience optimization
Description:
  • The layout and filtering on the Deployments page can be customized.
  • Specific UI styles are optimized.
  • The layout of specific pages is optimized. Information sections, such as the log section, are expanded.
References: None

Feature: Enhanced alerting capabilities
Description:
  • The No Data Warning switch is added to help you identify data source exceptions at the earliest opportunity.
  • The Alarm Noise Reduction switch is added to help you reduce the number of repeated alerts and invalid alerts. This reduces alert noise and improves the usability and accuracy of alerts.
References: Configure alert rules

Feature: Upgrade of the Hudi connector
Description: The Hudi connector is upgraded to Apache Hudi 0.13.1.
References: None

Feature: Upgrade of the Apache Paimon connector
Description: The Apache Paimon connector is upgraded to Apache Paimon 0.4.0.
References: Manage Apache Paimon catalogs

Feature: Exclusive Tunnel resource groups supported by the MaxCompute connector
Description: The MaxCompute connector supports exclusive Tunnel resource groups of MaxCompute, which makes data transfer more stable and efficient.
References: MaxCompute connector

Feature: Performance optimization of the DataHub connector for source tables
Description: In specific scenarios, the performance of the DataHub connector that is used for source tables is improved by about 290%.
References: None

Feature: Writing of time series data by using the Tablestore connector
Description: The time series model of Tablestore is designed based on the characteristics of time series data. You can use the Tablestore connector to write time series data to Tablestore.
References: None

Feature: Support for DLF as the metadata management center of Hive catalogs in Hive 3.X
Description: In Hive 3.X, Data Lake Formation (DLF) can be used as the metadata management center of Hive catalogs.
References: Manage Hive catalogs

Feature: Log Service catalogs
Description: Log Service catalogs are supported. After you register metadata by using a Log Service catalog, you do not need to frequently execute DDL statements to create a Log Service source table when you create an SQL deployment.
References: Manage Log Service catalogs
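
For example, after a Log Service catalog is registered, a Logstore can be referenced by its fully qualified name without a preceding DDL statement. The catalog, project, and Logstore names in the following sketch are placeholders:

    -- Read a Logstore through a registered Log Service catalog.
    -- No CREATE TABLE statement is required for the source.
    SELECT * FROM `sls_catalog`.`my_project`.`my_logstore`;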

Feature: MaxCompute catalogs
Description: MaxCompute catalogs are supported. After you register metadata by using a MaxCompute catalog, you do not need to frequently execute DDL statements to create a MaxCompute source table when you create an SQL deployment.
References: Manage MaxCompute catalogs
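
Similarly, after a MaxCompute catalog is registered, a MaxCompute table can be used as a source by its fully qualified name. The catalog, project, and table names in the following sketch are placeholders, and the print result table is used only for illustration:

    -- Illustrative result table that prints rows to logs.
    CREATE TEMPORARY TABLE print_sink (
      order_id BIGINT,
      amount DECIMAL(10, 2)
    ) WITH (
      'connector' = 'print'
    );

    -- Read a MaxCompute table through a registered MaxCompute catalog
    -- without declaring the source table with a DDL statement first.
    INSERT INTO print_sink
    SELECT order_id, amount FROM `odps_catalog`.`my_project`.`orders`;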

Fixed issues

  • The following issue is fixed: Memory overflow occurs when the MySQL connector is used together with the CREATE TABLE AS statement or the CREATE DATABASE AS statement to consume data of a MySQL CDC source table.

  • The following issue is fixed: A null pointer exception occurs when the Hologres connector is used for a dimension table.

  • The following issue is fixed: Memory overflow occurs when the Hologres connector is used for a source table.