Realtime Compute for Apache Flink: January 9, 2026

Last Updated: Mar 26, 2026

This topic describes the major feature changes and key bug fixes for Realtime Compute for Apache Flink released on January 9, 2026.

Important

This version upgrade is rolled out to users gradually. For more information, see the latest announcement in the Realtime Compute for Apache Flink console. You can use the new features in this version only after the upgrade is complete for your account. To apply for an expedited upgrade, submit a ticket.

Overview

On January 9, 2026, Realtime Compute for Apache Flink released Ververica Runtime (VVR) 11.5.0. This release introduces native Bitmap type support for real-time deduplication, expands Flink Change Data Capture (CDC) YAML capabilities to handle complex routing and dirty data scenarios, enhances the performance of Paimon in streaming data lakehouses, and strengthens connector support for PolarDB-X, OceanBase, and MongoDB to improve the efficiency of large-scale data integration. It also incorporates key bug fixes from Apache Flink 1.20.2 and 1.20.3.

Engine

This release strengthens core engine capabilities by providing significant upgrades to data type support, job fault tolerance and recovery, and synchronization with Apache Flink capabilities.

SQL enhancements

Bitmap type support — The new native Bitmap type and its initial set of functions (bitmap construction and cardinality statistics) enable efficient, real-time, precise deduplication—for example, unique visitor (UV) counting.
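To illustrate how a native Bitmap type is typically used for exact UV counting, the sketch below combines bitmap construction and cardinality functions in a windowed aggregation. The function names (TO_BITMAP, BITMAP_UNION, BITMAP_COUNT) and the page_views table are illustrative assumptions, not confirmed VVR 11.5.0 identifiers; check the release's function reference for the exact names.

```sql
-- Hypothetical sketch: TO_BITMAP / BITMAP_UNION / BITMAP_COUNT are
-- illustrative names for the bitmap construction and cardinality functions.
SELECT
  window_start,
  window_end,
  -- Exact (non-approximate) distinct-user count per hourly window.
  BITMAP_COUNT(BITMAP_UNION(TO_BITMAP(user_id))) AS uv
FROM TABLE(
  TUMBLE(TABLE page_views, DESCRIPTOR(event_time), INTERVAL '1' HOUR))
GROUP BY window_start, window_end;
```

Compared with COUNT(DISTINCT user_id), a bitmap-based aggregate keeps compact, mergeable per-window state, which is what makes precise real-time deduplication efficient at scale.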

Data ingestion via YAML (Flink CDC)

  • Metadata reference — Directly reference source and sink tables registered in the Catalog within Flink CDC YAML scripts, without redefining table schemas inline.

  • Complex routing logic — Configure table names in route rules using standard regular expressions, which supports merging sharded source tables into a single sink table.

  • Dirty data handling — JSON format parsing now supports a dirty data handling mechanism, so malformed records no longer cause job failures.

  • Automatic precision expansion — StarRocks now partially supports automatic precision expansion for the Decimal type, reducing manual schema adjustments for high-precision data.

  • Empty schema fault tolerance — The Kafka source can now create tables with an empty schema when the source has no data yet, preventing job startup failures in early-stage pipelines.

  • Paimon source support — Apache Paimon is now officially supported as a Flink CDC source, enabling change data capture directly from Paimon tables.
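The fragment below sketches how regular-expression routing can merge sharded tables in a Flink CDC YAML pipeline. Host names, credentials, and the StarRocks sink are placeholders, and exact option keys can vary by connector version; consult the Flink CDC pipeline documentation for the authoritative schema.

```yaml
# Illustrative pipeline definition; connection details are placeholders.
source:
  type: mysql
  hostname: mysql.example.internal
  port: 3306
  username: flink_user
  password: "<placeholder>"
  tables: app_db.orders_[0-9]+   # regex matching all order shards

sink:
  type: starrocks
  jdbc-url: jdbc:mysql://starrocks.example.internal:9030
  load-url: starrocks.example.internal:8080
  username: flink_user
  password: "<placeholder>"

route:
  # Merge sharded tables app_db.orders_00 ... app_db.orders_99
  # into a single sink table ods_db.orders.
  - source-table: app_db.orders_[0-9]+
    sink-table: ods_db.orders
```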

Connectors

  • MySQL — Three enhancements in this release:

    • Throttling: A data throttling feature prevents excessive load on the source database during high-throughput ingestion.

    • Metadata extension: The file and pos binary logging metadata fields are now supported. Use these fields to determine the modification order of the same row across events.

    • Performance optimization: In snapshot-only mode, the connector no longer executes binary logging-related commands, reducing overhead when binary logging is not needed.

  • PolarDB-X CDC — Public preview. Two enhancements:

    • Subscribe to binary logging streams with multiple concurrent connections to significantly increase ingestion throughput.

    • Subscribe to binary logging at the table level for finer-grained CDC control.

  • OceanBase — Public preview. The new bypass import write feature significantly improves write performance for large data batches.

  • MongoDB — The sink table now supports Partial Update, so only changed fields are written rather than full documents.

  • Elasticsearch — Two enhancements:

    • Explicitly configure the doc_as_upsert parameter to control upsert behavior.

    • Configure a connection timeout parameter to handle unstable network environments.

  • Apache Kafka — Redundant consumer groups created by the Kafka connector are now automatically deleted, reducing clutter in your Kafka cluster.
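As one example of how these connector options surface in SQL, the sketch below declares an Elasticsearch sink with explicit upsert and timeout settings. The option keys 'doc_as_upsert' and 'connection.timeout' are assumptions based on the feature descriptions above; confirm the exact keys and value formats in the Elasticsearch connector documentation for VVR 11.5.0.

```sql
CREATE TEMPORARY TABLE es_sink (
  id   STRING,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'elasticsearch',
  'hosts' = 'http://es.example.internal:9200',
  'index' = 'users',
  -- Illustrative option names; check the connector reference for exact keys.
  'doc_as_upsert' = 'true',        -- update existing docs, insert if absent
  'connection.timeout' = '30s'     -- fail fast on unstable networks
);
```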

Data lakehouse integration (Iceberg)

Iceberg Connector — Configurable Hadoop-related parameters give you greater flexibility when connecting to complex Hadoop environments.
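A sketch of what configurable Hadoop-related parameters might look like in a table declaration follows. The 'hadoop.' prefix and the specific property shown are assumptions for illustration; the supported prefix and pass-through keys should be confirmed in the Iceberg connector documentation.

```sql
CREATE TEMPORARY TABLE iceberg_sink (
  id   BIGINT,
  name STRING
) WITH (
  'connector' = 'iceberg',
  'catalog-name' = 'hive_prod',
  'uri' = 'thrift://metastore.example.internal:9083',
  'warehouse' = 'hdfs://namenode.example.internal:8020/warehouse',
  -- Illustrative: a Hadoop client setting forwarded to the underlying
  -- FileSystem; confirm the supported prefix and keys before use.
  'hadoop.dfs.client.use.datanode.hostname' = 'true'
);
```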

Bug fixes

  • Incorporated key fixes from Apache Flink 1.20.2 and 1.20.3.

  • Fixed an issue where data fields were incorrectly read as null when Flink consumed data from a Hologres binary logging source table.

  • Fixed a data loss issue that occurred when Flink read data from Kafka and wrote it to OSS after transactions were enabled in the Kafka connector.

  • Fixed an issue where a disconnected PolarDB-X connection caused a sharp increase in latency and threw an EOFException error.

  • Optimized CDC execution logic by removing unnecessary instructions in specific modes.