Data Transmission Service: Synchronize data from an ApsaraDB RDS for MySQL instance to a self-managed Kafka cluster

Last Updated: Sep 06, 2024

Kafka is a distributed message queue service that features high throughput and high scalability. It is widely used for big data analytics, such as log collection, monitoring data aggregation, stream processing, and online and offline analysis, and is an essential part of the big data ecosystem. This topic describes how to synchronize data from an ApsaraDB RDS for MySQL instance to a self-managed Kafka cluster by using Data Transmission Service (DTS). The data synchronization feature allows you to extend message processing capabilities.

Prerequisites

  • A self-managed Kafka cluster is created. In this example, the Kafka cluster is deployed on an Elastic Compute Service (ECS) instance.

Usage notes

  • DTS uses the read and write resources of the source and destination RDS instances during initial full data synchronization. This may increase the loads of the instances. If an instance has a low specification, poor performance, or a large data volume, the database service may become unavailable. For example, DTS occupies a large amount of read and write resources in the following cases: a large number of slow SQL queries are performed on the source RDS instance, the tables have no primary keys, or a deadlock occurs in the destination RDS instance. Before you synchronize data, evaluate the impact of data synchronization on the performance of the source and destination RDS instances. We recommend that you synchronize data during off-peak hours, for example, when the CPU utilization of the source and destination RDS instances is less than 30%.

  • Each table to be synchronized in the source database must have a PRIMARY KEY or UNIQUE constraint, and the constrained fields must be unique. Otherwise, the destination database may contain duplicate data records.
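
    If you are not sure whether every table meets this requirement, you can check the source database before you configure the task. The following is a minimal sketch that uses Python with PyMySQL; the endpoint, account, password, and schema name are hypothetical placeholders that you must replace with your own values.

      import pymysql

      # Hypothetical connection details for the source ApsaraDB RDS for MySQL instance.
      conn = pymysql.connect(
          host="rm-example.mysql.rds.aliyuncs.com",
          user="dts_user",
          password="your_password",
      )

      # Find base tables in the schema that have neither a PRIMARY KEY nor a
      # UNIQUE constraint; these tables can cause duplicate records in the
      # destination database.
      QUERY = """
          SELECT t.table_name
          FROM information_schema.tables t
          LEFT JOIN information_schema.table_constraints c
                 ON c.table_schema = t.table_schema
                AND c.table_name = t.table_name
                AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
          WHERE t.table_schema = %s
            AND t.table_type = 'BASE TABLE'
            AND c.constraint_name IS NULL
      """

      with conn.cursor() as cur:
          cur.execute(QUERY, ("your_database",))
          for (table_name,) in cur.fetchall():
              print("No PRIMARY KEY or UNIQUE constraint:", table_name)
      conn.close()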

Billing

The task configuration fee depends on the synchronization type:

  • Schema synchronization and full data synchronization: free of charge.

  • Incremental data synchronization: charged. For more information, see Billing overview.

Limits

  • Only tables can be selected as the objects to synchronize.

  • If a table is renamed during synchronization and the new table name is not included in the objects to be synchronized, DTS does not synchronize the data of the table to the destination Kafka cluster. To synchronize the data of the renamed table, you must reselect the objects to be synchronized. For more information, see Add an object to a data synchronization task.

Supported synchronization topologies

  • One-way one-to-one synchronization

  • One-way one-to-many synchronization

  • One-way many-to-one synchronization

  • One-way cascade synchronization

Procedure

  1. Purchase a data synchronization instance. For more information, see Purchase a DTS instance.

    Note

    On the buy page, set the Source Instance parameter to MySQL, the Destination Instance parameter to Kafka, and the Synchronization Topology parameter to One-Way Synchronization.

  2. Log on to the Data Transmission Service (DTS) console.

  3. In the left-side navigation pane, click Data Synchronization.

  4. In the upper part of the Data Synchronization Tasks page, select the region in which the destination instance resides.

  5. Find the data synchronization task and click Configure Task in the Actions column.

  6. Configure the source instance and destination cluster.

    Section: N/A

    • Synchronization Task Name: The task name that DTS automatically generates. We recommend that you specify a descriptive name that makes it easy to identify the task. The task name does not need to be unique.

    Section: Source Instance Details

    • Instance Type: The type of the source instance. Select RDS Instance.

    • Instance Region: The source region that you selected on the buy page. The value of this parameter cannot be changed.

    • Instance ID: The ID of the source ApsaraDB RDS instance.

    • Database Account: The account that is used to connect to the source database. The account must have the SELECT permission on the required objects and the REPLICATION CLIENT, REPLICATION SLAVE, and SHOW VIEW permissions.

    • Database Password: The password of the source database account.

    • Encryption: Specifies whether to encrypt the connection to the source instance. Select Non-encrypted or SSL-encrypted based on your business and security requirements. If you select SSL-encrypted, you must enable SSL encryption for the ApsaraDB RDS instance before you configure the data synchronization task. For more information, see Use a cloud certificate to enable SSL encryption.

      Important: The Encryption parameter is available only in regions in the Chinese mainland and in the China (Hong Kong) region.

    Section: Destination Instance Details

    • Instance Type: The deployment type of the Kafka cluster. In this example, User-Created Database in ECS Instance is selected.

      Note: If you set the Instance Type parameter to another value, you must deploy the network environment for the Kafka cluster. For more information, see Preparation overview.

    • Instance Region: The destination region that you selected on the buy page. The value of this parameter cannot be changed.

    • ECS Instance ID: The ID of the Elastic Compute Service (ECS) instance on which the Kafka cluster is deployed.

      Note: If the Kafka cluster is deployed in a cluster architecture, you need to select only the ID of the ECS instance on which one node of the cluster resides. DTS automatically obtains the topic information of all nodes in the Kafka cluster.

    • Database Type: The type of the destination database. Select Kafka.

    • Port Number: The service port number of the Kafka cluster. Default value: 9092.

    • Database Account: The username that is used to log on to the Kafka cluster. If authentication is not enabled for the Kafka cluster, you do not need to enter the username.

    • Database Password: The password of the Kafka cluster. If authentication is not enabled for the Kafka cluster, you do not need to enter the password.

    • Kafka Version: The version of the destination Kafka cluster.

    • Encryption: Specifies whether to encrypt the connection to the destination cluster. Select Non-encrypted or SCRAM-SHA-256 based on your business and security requirements.

    • Topic: The topic to which data is synchronized. Click Get Topic List and select a topic name from the drop-down list. A sketch that verifies the connection and lists the available topics follows this table.

    • Topic That Stores DDL Information: The topic that stores the DDL information. Select a topic from the drop-down list. If you do not specify this parameter, the DDL information is stored in the topic that is specified by the Topic parameter.

    • Use Kafka Schema Registry: Specifies whether to use Kafka Schema Registry, which provides a serving layer for your metadata and a RESTful API to store and retrieve your Avro schemas. Valid values:

      • No: Kafka Schema Registry is not used.

      • Yes: Kafka Schema Registry is used. In this case, you must enter the URL or IP address that is registered in Kafka Schema Registry for your Avro schemas.
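
    Before you click Get Topic List, you can confirm that the Kafka cluster is reachable and inspect the topics that it exposes. The following is a minimal sketch that uses Python with the kafka-python package; the broker address and credentials are hypothetical placeholders, and the SASL lines apply only if authentication is enabled for the cluster and you selected SCRAM-SHA-256.

      from kafka import KafkaConsumer

      # Hypothetical broker address: one ECS-hosted Kafka node and the value of
      # the Port Number parameter.
      consumer = KafkaConsumer(
          bootstrap_servers="ecs-kafka-host:9092",
          # Uncomment and fill in the following lines for SCRAM-SHA-256:
          # security_protocol="SASL_PLAINTEXT",
          # sasl_mechanism="SCRAM-SHA-256",
          # sasl_plain_username="kafka_user",
          # sasl_plain_password="your_password",
      )

      # topics() returns the set of topic names known to the cluster. The topic
      # that you plan to select from the drop-down list should appear here.
      print(sorted(consumer.topics()))
      consumer.close()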

  7. In the lower-right corner of the page, click Set Whitelist and Next.

    Depending on the type of the source or destination database, handle the IP address whitelist as follows:

    • Alibaba Cloud database instance, such as an ApsaraDB RDS for MySQL or ApsaraDB for MongoDB instance: DTS automatically adds the CIDR blocks of DTS servers to the IP address whitelist of the instance.

    • Self-managed database hosted on an Elastic Compute Service (ECS) instance: DTS automatically adds the CIDR blocks of DTS servers to the security group rules of the ECS instance, and you must make sure that the ECS instance can access the database. A minimal reachability check follows the warning below. If the self-managed database is hosted on multiple ECS instances, you must manually add the CIDR blocks of DTS servers to the security group rules of each ECS instance.

    • Self-managed database deployed in a data center or provided by a third-party cloud service provider: you must manually add the CIDR blocks of DTS servers to the IP address whitelist of the database to allow DTS to access the database. For more information, see Add the CIDR blocks of DTS servers.

    Warning

    If the CIDR blocks of DTS servers are automatically or manually added to the whitelist of the database or instance, or to the ECS security group rules, security risks may arise. Therefore, before you use DTS to synchronize data, you must understand and acknowledge the potential risks and take preventive measures, including but not limited to the following measures: enhancing the security of your username and password, limiting the ports that are exposed, authenticating API calls, regularly checking the whitelist or ECS security group rules and removing unauthorized CIDR blocks, or connecting the database to DTS by using Express Connect, VPN Gateway, or Smart Access Gateway.
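
    To confirm that an ECS instance can access a self-managed database, a plain TCP connection test is often sufficient. The following is a minimal sketch in Python; the host and port are hypothetical placeholders for your database endpoint, such as the Kafka service port.

      import socket

      def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
          """Return True if a TCP connection to host:port succeeds within the timeout."""
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError:
              return False

      # Hypothetical endpoint of the self-managed database on the ECS instance.
      print(can_reach("192.168.0.10", 9092))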

  8. Select the objects to be synchronized.

    Parameter settings:

    • Data Format in Kafka: The format in which the data that is synchronized to the Kafka cluster is stored. Select Avro or Canal JSON. For more information, see Data formats of a Kafka cluster. A consumer sketch that parses Canal JSON records follows this table.

    • Policy for Shipping Data to Kafka Partitions: The policy that is used to synchronize data to Kafka partitions. Select a policy based on your business requirements. For more information, see Specify the policy for synchronizing data to Kafka partitions.

    • Objects to be synchronized: Select one or more tables from the Available section and click the rightwards arrow icon to add the tables to the Selected section.

      Note: DTS maps the table names to the topic name that you selected in Step 6. You can use the table name mapping feature to change the topics to which the tables are synchronized in the destination cluster. For more information, see Rename an object to be synchronized.

    • Rename Databases and Tables: You can use the object name mapping feature to rename the objects that are synchronized to the destination instance. For more information, see Object name mapping.

    • Retry Time for Failed Connections: By default, if DTS fails to connect to the source or destination database, DTS retries within the next 720 minutes (12 hours). You can specify the retry time based on your business requirements. If DTS reconnects to the source and destination databases within the specified time, DTS resumes the data synchronization task. Otherwise, the data synchronization task fails.

      Note: While DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time based on your business requirements, and release the DTS instance at the earliest opportunity after the source and destination instances are released.
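
    After the task starts, you can verify the output by consuming the destination topic. The following is a minimal consumer sketch that uses Python with the kafka-python package and assumes that you selected Canal JSON as the data format; the broker address and topic name are hypothetical placeholders, and the authoritative field list is described in Data formats of a Kafka cluster.

      import json

      from kafka import KafkaConsumer

      consumer = KafkaConsumer(
          "dts_topic",                              # hypothetical topic name
          bootstrap_servers="ecs-kafka-host:9092",  # hypothetical broker address
          auto_offset_reset="earliest",
          value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
      )

      for message in consumer:
          record = message.value
          # Canal JSON records typically carry the operation type, the source
          # database and table, and the changed rows.
          print(record.get("type"), record.get("database"), record.get("table"))
          for row in record.get("data") or []:
              print("  row:", row)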

  9. In the lower-right corner of the page, click Next.

  10. Configure initial synchronization.

    Parameter settings:

    • Initial Synchronization: Select both Initial Schema Synchronization and Initial Full Data Synchronization. DTS synchronizes the schemas and historical data of the required objects and then synchronizes incremental data.

    • Filter Options: Ignore DDL in incremental synchronization phase is selected by default. In this case, DTS does not synchronize the DDL operations that are performed on the source database during incremental data synchronization.

  11. In the lower-right corner of the page, click Precheck.

    Note
    • Before you can start the data synchronization task, DTS performs a precheck. You can start the data synchronization task only after the task passes the precheck.

    • If the task fails to pass the precheck, you can click the Tips icon next to each failed item to view details.

      • After you troubleshoot the issues based on the details, initiate a new precheck.

      • If you do not need to troubleshoot the issues, ignore the failed items and initiate a new precheck.

  12. Close the Precheck dialog box after the following message is displayed: Precheck Passed. Then, the data synchronization task starts.

    You can view the status of the data synchronization task on the Data Synchronization page.