
Flink CDC Series – Part 2: Flink MongoDB CDC Production Practices in XTransfer


By Sun Jiabao

In Part 2 of this 5-part series, Sun Jiabao shares how XTransfer implemented the Flink MongoDB CDC Connector on top of Flink CDC using the MongoDB Change Streams feature.

The main contents include:

  1. Flink CDC
  2. MongoDB Replication Mechanism
  3. Flink MongoDB CDC
  4. Production Practice
  5. Future Planning

Introduction

XTransfer focuses on providing cross-border financial and risk-control services for small and medium-sized cross-border B2B e-commerce enterprises. XTransfer has built a global financial management platform and, through data-driven, automated, Internet-based, and intelligent risk-control infrastructure, provides comprehensive solutions covering the opening of global and local collection accounts, foreign exchange, overseas foreign exchange control declarations, and other cross-border financial services.

In the early stage of business development, we chose a traditional offline data warehouse architecture with a data integration mode of full collection, batch processing, and overwriting, which resulted in poor data timeliness. As the business developed, the offline data warehouse could no longer meet the requirements for data timeliness, so we decided to evolve toward a real-time data warehouse. The key to building a real-time data warehouse lies in the choice of data collection tools and the real-time computing engine.

After a series of investigations, we noticed the Flink CDC project in February 2021. It embeds Debezium, which enables Flink to capture change data, lowers the development threshold, and simplifies deployment. In addition, Flink's powerful real-time computing capabilities and rich connectivity to external systems made it a key tool for us to build a real-time data warehouse.

We also make extensive use of MongoDB in production. As a result, we implemented the Flink MongoDB CDC Connector based on the MongoDB Change Streams feature on top of Flink CDC and contributed it to the Flink CDC community. It has been released in Flink CDC version 2.1. We are honored to share the implementation details and production practices here.

1. Flink CDC

Dynamic tables are the core concept of Flink's Table API and SQL for supporting streaming data. Streams and tables have a duality: a table can be converted into a changelog stream, and a changelog stream can be replayed into a table.

A change stream takes two forms: Append Mode and Update Mode. In Append Mode, records are only added, never changed or deleted (for example, event streams). In Update Mode, records may be added, changed, and deleted (for example, database operation logs). Before Flink 1.11, dynamic tables could only be defined on streams in Append Mode.

Flink 1.11 introduced the new TableSource and TableSink interfaces in FLIP-95 to support Update Mode changelogs, and FLIP-105 added support for the Debezium and Canal CDC formats. You can define a dynamic table from a changelog by implementing ScanTableSource, which receives the changelog of an external system (such as a database changelog), interprets it as a changelog that Flink can recognize, and forwards it downstream.
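
For example, FLIP-105 makes it possible to interpret a Debezium changelog topic as a dynamic table directly in Flink SQL. The following is a minimal sketch; the topic name, Kafka address, and table schema are illustrative assumptions, not values from this article:

CREATE TABLE products (
    id          INT,
    name        STRING,
    description STRING
) WITH (
    'connector' = 'kafka',                              -- changelog delivered via Kafka
    'topic' = 'mysql.inventory.products',               -- hypothetical Debezium topic
    'properties.bootstrap.servers' = 'localhost:9092',
    'properties.group.id' = 'flink-cdc-demo',
    'scan.startup.mode' = 'earliest-offset',
    'format' = 'debezium-json'                          -- interpret records as a changelog (FLIP-105)
);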


In Flink, changelog records are represented by RowData. Each RowData carries a RowKind, of which there are four: +I (INSERT), -U (UPDATE_BEFORE), +U (UPDATE_AFTER), and -D (DELETE). Depending on which RowKind types a changelog contains, there are three changelog modes.

  • INSERT_ONLY: Contains only +I. Suitable for batch processing and event streams.
  • ALL: Contains all four RowKind types (+I, -U, +U, and -D), such as MySQL binlog.
  • UPSERT: Contains only three RowKind types (+I, +U, and -D) and no -U. It requires idempotent updates on a unique key, such as MongoDB Change Streams (see the upsert-kafka sketch below).
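
The upsert-kafka connector introduced in FLIP-149 is a typical example of the UPSERT mode: the declared primary key is the unique key on which idempotent updates are applied. A minimal sketch, with an assumed topic, Kafka address, and schema:

CREATE TABLE firm_upsert (
    _id     STRING NOT NULL,
    name    STRING,
    address STRING,
    PRIMARY KEY (_id) NOT ENFORCED   -- unique key required for idempotent upserts
) WITH (
    'connector' = 'upsert-kafka',
    'topic' = 'firm',
    'properties.bootstrap.servers' = 'localhost:9092',
    'key.format' = 'json',
    'value.format' = 'json'
);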

2. MongoDB Replication Mechanism

As mentioned in the previous section, the key to implementing Flink MongoDB CDC is how to convert MongoDB's operation logs into a changelog supported by Flink. To solve this problem, you first need to understand MongoDB's cluster deployment and replication mechanism.

2.1 Replica Sets and Sharded Clusters

A replica set is the high-availability deployment mode provided by MongoDB. Members of a replica set synchronize data by replicating the oplog (operation log).

A sharded cluster is the deployment mode MongoDB provides for large-scale datasets and high-throughput operations. Each shard consists of one replica set.


2.2 Replica Set Oplog

The oplog (operation log) is a special capped collection (fixed-capacity collection) in MongoDB. It records data operation logs and is used for synchronization between replica set members. The following example shows the data structure of an oplog record:

{
    "ts" : Timestamp(1640190995, 3),
    "t" : NumberLong(434),
    "h" : NumberLong(3953156019015894279),
    "v" : 2,
    "op" : "u",
    "ns" : "db.firm",
    "ui" : UUID("19c72da0-2fa0-40a4-b000-83e038cd2c01"),
    "o2" : {
        "_id" : ObjectId("61c35441418152715fc3fcbc")
    },
    "wall" : ISODate("2021-12-22T16:36:35.165Z"),
    "o" : {
        "$v" : 1,
        "$set" : {
            "address" : "Shanghai China"
        }
    }
}


As shown in the example, the update record in a MongoDB oplog contains neither the information before the update nor the complete record after the update. Therefore, update records can be converted into neither the ALL-type changelog supported by Flink nor the UPSERT-type changelog.

In addition, in a sharded cluster, data writes may occur on different shards' replica sets, and the oplog of each shard only records the data changes that occur on that shard. To obtain complete data changes, the oplogs of each shard must be sorted and merged by operation time, which increases the difficulty and risk of capturing change records.

Before version 1.7, the Debezium MongoDB Connector captured change data by traversing oplogs. For the preceding reasons, we did not use the Debezium MongoDB Connector and instead chose MongoDB's official MongoDB Kafka Connector, which is based on Change Streams.

2.3 Change Streams

Change Streams is a new feature released in MongoDB 3.6. It shields the complexity of traversing oplogs and enables users to subscribe to data changes at the cluster, database, and collection levels through simple API operations.

2.3.1 Use Conditions

  • WiredTiger Storage Engine
  • Replica set (in a test environment, you can use a single-node replica set) or a split cluster deployment
  • Replica Set Protocol Version: pv1 (default)
  • Before version 4.0, Majority Read Concern must be allowed: replication.enableMajorityReadConcern = true (allowed by default)
  • The MongoDB user must have the find and changeStream permissions

2.3.2 Change Events

A Change Event is the change record returned by Change Streams. Its data structure is as follows:

{
   _id : { <BSON Object> },
   "operationType" : "<operation>",
   "fullDocument" : { <document> },
   "ns" : {
      "db" : "<database>",
      "coll" : "<collection>"
   },
   "to" : {
      "db" : "<database>",
      "coll" : "<collection>"
   },
   "documentKey" : { "_id" : <value> },
   "updateDescription" : {
      "updatedFields" : { <document> },
      "removedFields" : [ "<field>", ... ],
      "truncatedArrays" : [
         { "field" : <field>, "newSize" : <integer> },
         ...
      ]
   },
   "clusterTime" : <Timestamp>,
   "txnNumber" : <NumberLong>,
   "lsid" : {
      "id" : <UUID>,
      "uid" : <BinData>
   }
}


2.3.3 Update Lookup

Since an update operation in the oplog only contains the fields after the change, the complete post-change document cannot be obtained from the oplog alone. However, when converting to a changelog in UPSERT mode, the UPDATE_AFTER RowData must carry a complete row record. By setting fullDocument = updateLookup, Change Streams returns the latest state of the document together with the change record. In addition, each Change Event contains the documentKey (_id and the shard key), which identifies the primary key of the changed record, so the condition for idempotent updates is met. Therefore, the Update Lookup feature can be used to convert MongoDB change records into Flink's UPSERT changelog.

3. Flink MongoDB CDC

In terms of the specific implementation, we integrated the MongoDB Kafka Connector, which MongoDB implements officially based on Change Streams. With the Debezium EmbeddedEngine, the MongoDB Kafka Connector can easily be driven inside Flink. The MongoDB CDC TableSource is implemented by converting the Change Stream into Flink's UPSERT changelog. With the resume mechanism of Change Streams, recovery from checkpoints and savepoints is implemented.

As described in FLIP-149, some operations (such as aggregations) are difficult to handle correctly when the -U message is missing. For a changelog of the UPSERT type, the Flink planner therefore introduces an additional compute node (Changelog Normalize) to normalize it into a changelog of the ALL type.
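
For instance, an aggregation over an UPSERT source relies on this normalization step. A sketch, assuming the firm collection from the oplog example above is mapped to a mongodb-cdc table named firm with an address column (as defined in the example under Capabilities below):

-- The planner inserts a Changelog Normalize node between the UPSERT source
-- and the aggregation so that retraction (-U) messages can be generated correctly.
SELECT address, COUNT(*) AS firm_cnt
FROM firm
GROUP BY address;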


Capabilities

  • Supports Exactly-Once semantics
  • Supports full and incremental subscriptions
  • Supports snapshot data filtering
  • Supports recovery from checkpoints and savepoints
  • Supports metadata extraction (see the example below)
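
The following Flink SQL sketch puts these capabilities together. The schema is illustrative, the connection values are taken from examples elsewhere in this article, and the metadata keys (database_name, collection_name, op_ts) are assumptions based on the connector documentation, so verify them against your connector version:

CREATE TABLE firm (
    _id       STRING NOT NULL,
    name      STRING,
    address   STRING,
    db_name   STRING METADATA FROM 'database_name' VIRTUAL,    -- metadata extraction
    coll_name STRING METADATA FROM 'collection_name' VIRTUAL,
    op_ts     TIMESTAMP_LTZ(3) METADATA FROM 'op_ts' VIRTUAL,
    PRIMARY KEY (_id) NOT ENFORCED                              -- documentKey (_id) as the upsert key
) WITH (
    'connector' = 'mongodb-cdc',
    'hosts' = '127.0.0.1:27017',
    'username' = 'flink',
    'password' = 'flinkpw',
    'database' = 'db',
    'collection' = 'firm'
);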

4. Production Practice

4.1 Use RocksDB State Backend

Changelog Normalize introduces additional state overhead in order to restore the pre-update image (-U). We recommend using the RocksDB state backend in production environments.
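
For example, in recent Flink versions the state backend can be switched in the SQL client (or equivalently in flink-conf.yaml); the checkpoint interval below is only an illustrative value:

-- Switch the state backend to RocksDB and enable incremental checkpoints
-- to keep checkpoint sizes manageable for the Changelog Normalize state.
SET 'state.backend' = 'rocksdb';
SET 'state.backend.incremental' = 'true';
SET 'execution.checkpointing.interval' = '60s';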

4.2 Appropriate Oplog Capacity and Expiration Time

MongoDB's oplog.rs is a special fixed-capacity collection. When oplog.rs reaches its maximum capacity, the oldest records are discarded. Change Streams rely on a resume token for recovery. If the oplog capacity is too small, the oplog record corresponding to the resume token may no longer exist, and recovery will fail.

When the oplog capacity is not explicitly specified, the WiredTiger engine defaults it to 5% of the disk size, with a lower limit of 990 MB and an upper limit of 50 GB. Starting from MongoDB 4.4, you can also set a minimum retention period for the oplog; oplog records are only reclaimed when the oplog is full and the records have exceeded the minimum retention period.

You can use the replSetResizeOplog command to reset the oplog capacity and minimum retention period. In production environments, we recommend setting the oplog capacity to no less than 20 GB and the oplog retention period to no less than seven days.

db.adminCommand(
  {
    replSetResizeOplog: 1, // fixed value 1
    size: 20480,           // The unit is MB, which ranges from 990MB to 1PB
    minRetentionHours: 168 // Optional. Unit: hours
  }
)

4.3 Enable Heartbeat Events for Slowly Changing Tables

Flink MongoDB CDC periodically writes the resume token to the checkpoint so the Change Stream can be restored. MongoDB change events or heartbeat events can trigger an update of the resume token. If the subscribed collection changes slowly, the resume token corresponding to the last change record may expire, and recovery from the checkpoint will fail. Therefore, we recommend enabling heartbeat events (set heartbeat.interval.ms > 0) to keep the resume token up to date.

WITH (
    'connector' = 'mongodb-cdc',
    'heartbeat.interval.ms' = '60000'
)

4.4 Custom MongoDB Connection Parameters

If the default connection does not meet your requirements, you can use the connection.options configuration option to pass additional connection parameters supported by MongoDB.

WITH (
   'connector' = 'mongodb-cdc',
   'connection.options' = 'authSource=authDB&maxPoolSize=3'
)

4.5 Change Stream Parameter Tuning

You can fine-tune how change events are pulled in the Flink DDL through poll.await.time.ms and poll.max.batch.size.

  • poll.await.time.ms

The interval for pulling change events; the default value is 1,500 ms. For collections that change frequently, you can reduce the pull interval to improve processing timeliness. For collections that change slowly, you can increase the pull interval to reduce the pressure on the database.

  • poll.max.batch.size

The maximum number of change events pulled in each batch; the default value is 1,000. Increasing this value speeds up pulling change events from the cursor but increases the memory overhead. A combined example is shown below.
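
For example, for a slowly changing collection you might lengthen the await time and shrink the batch size; the values below are only illustrative:

WITH (
    'connector' = 'mongodb-cdc',
    'poll.await.time.ms' = '3000',
    'poll.max.batch.size' = '500'
)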

4.6 Subscribing to Whole-Database and Cluster-Wide Changes

  • database = "db", collection = "" – subscribe to changes of the entire db database
  • database = "", collection = "" – subscribe to changes of the entire cluster

With the DataStream API, you can use a pipeline to filter the databases and collections to subscribe to. Filtering of snapshot collections is currently not supported.

MongoDBSource.<String>builder()
    .hosts("127.0.0.1:27017")
    .database("")       // empty string: subscribe to the whole cluster
    .collection("")
    .pipeline("[{'$match': {'ns.db': {'$regex': '/^(sandbox|firewall)$/'}}}]")  // filter the subscribed databases
    .deserializer(new JsonDebeziumDeserializationSchema())
    .build();

4.7 Permission Control

MongoDB supports fine-grained control over users, roles, and permissions. Users that enable Change Stream must have the find and changeStream permissions.

  • Single Collection
{ resource: { db: <dbname>, collection: <collection> }, actions: [ "find", "changeStream" ] }
  • Single Database
{ resource: { db: <dbname>, collection: "" }, actions: [ "find", "changeStream" ] }
  • Cluster
{ resource: { db: "", collection: "" }, actions: [ "find", "changeStream" ] }

In production environments, we recommend creating a dedicated Flink user and role and granting fine-grained authorization to that role. Note: MongoDB allows users and roles to be created in any database. If the user is not created in admin, you must specify authSource = <the database where the user was created> in the connection parameters.

use admin;
// Create a user
db.createUser(
 {
   user: "flink",
   pwd: "flinkpw",
   roles: []
 }
);

// Create a role
db.createRole(
   {
     role: "flink_role", 
     privileges: [
       { resource: { db: "inventory", collection: "products" }, actions: [ "find", "changeStream" ] }
     ],
     roles: []
   }
);

// Grant a role to a user
db.grantRolesToUser(
    "flink",
    [
      // Note: db refers to the database where the role was created. Roles created under admin can be granted access to different databases.
      { role: "flink_role", db: "admin" }
    ]
);

// Add permissions to the role
db.grantPrivilegesToRole(
    "flink_role",
     [
       { resource: { db: "inventory", collection: "orders" }, actions: [ "find", "changeStream" ] }
     ]
);

In development and test environments, you can grant the read and readAnyDatabase built-in roles to the Flink user to open Change Streams on any collection.

use admin;
db.createUser({
  user: "flink",
  pwd: "flinkpw",
  roles: [
    { role: "read", db: "admin" },
    { role: "readAnyDatabase", db: "admin" }
  ]
});

5. Future Planning

  • Support for incremental snapshots

Currently, the MongoDB CDC Connector does not support incremental snapshots, so Flink's parallel computing advantages cannot be fully utilized for tables with large data volumes. An incremental snapshot feature for MongoDB will be implemented in the future to support checkpointing and parallelism settings during the snapshot phase.

  • Support for subscribing to changes from a specified time

Currently, the MongoDB CDC Connector only supports subscribing to the Change Stream from the current time. Subscription to the Change Stream starting from a specified timestamp will be provided later.

  • Support for database and collection filtering

Currently, the MongoDB CDC Connector supports change subscription and filtering at the cluster and database levels but does not support filtering of snapshot collections. This will be improved in the future.

References

[1] Duality of Streams and Tables

[2] FLIP-95: New TableSource and TableSink interfaces

[3] FLIP-105: Support to Interpret Changelog in Flink SQL (Introducing Debezium and Canal Format)

[4] FLIP-149: Introduce the upsert-kafka Connector

[5] Apache Flink 1.11.0 Release Announcement

[6] Introduction to SQL in Flink 1.11

[7] MongoDB Manual

[8] MongoDB Connection String Options

[9] MongoDB Kafka Connector
