This topic describes how to migrate data from a PolarDB for MySQL cluster to a DataHub project by using Data Transmission Service (DTS).
Prerequisites
- DataHub is activated, and the destination project that receives the data to be migrated is created. For more information, see Get started with DataHub and Manage projects. A scripted alternative is sketched after this list.
- The available storage space of the destination DataHub project is larger than the total size of the data in the source PolarDB for MySQL cluster.
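The prerequisites can also be scripted. The following is a minimal sketch that uses the open source DataHub Python SDK (pydatahub) to create the destination project; the endpoint, credentials, and project name are placeholders, and creating the project in the DataHub console works equally well.

```python
# Sketch: create the destination DataHub project with the pydatahub SDK.
# The endpoint, credentials, and project name are placeholders; replace them
# with your own values.
from datahub import DataHub

dh = DataHub(
    access_id="<your-access-key-id>",
    access_key="<your-access-key-secret>",
    endpoint="https://dh-cn-hangzhou.aliyuncs.com",  # example region endpoint
)

project_name = "dts_migration_demo"  # hypothetical project name
dh.create_project(project_name, comment="Receives data migrated by DTS from PolarDB for MySQL")
print(dh.list_project().project_names)  # confirm that the new project is listed
```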
Limits
DTS does not migrate foreign keys in the source database to the destination database. Therefore, the cascade update and delete operations of the source database are not migrated to the destination database.
| Category | Description |
| --- | --- |
| Limits on the source database | |
| Other limits | |
| Usage notes | DTS executes the CREATE DATABASE IF NOT EXISTS `test` statement in the source database as scheduled to move forward the binary log file position. |
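The usage note above refers to the binary log file position of the source cluster. If you want to inspect that position yourself, the following sketch queries it with pymysql, assuming a placeholder cluster endpoint and an account that has the required privileges.

```python
# Sketch: inspect the source cluster's binary log position, which DTS moves
# forward by periodically executing CREATE DATABASE IF NOT EXISTS `test`.
# Host and credentials are placeholders for your PolarDB for MySQL endpoint.
import pymysql

conn = pymysql.connect(
    host="pc-xxxxxxxx.mysql.polardb.rds.aliyuncs.com",  # placeholder endpoint
    port=3306,
    user="dts_user",
    password="<password>",
)
try:
    with conn.cursor() as cur:
        cur.execute("SHOW VARIABLES LIKE 'log_bin'")
        print(cur.fetchone())  # ('log_bin', 'ON') if binary logging is enabled
        cur.execute("SHOW MASTER STATUS")
        print(cur.fetchone())  # current binary log file and position
finally:
    conn.close()
```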
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
| --- | --- | --- |
| Schema migration and full data migration | Free of charge. | Charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Billing overview. |
| Incremental data migration | Charged. For more information, see Billing overview. | Charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Billing overview. |
Migration types
Schema migration
DTS migrates the schemas of the selected objects from the source database to the destination database.
Incremental data migration
After full data migration is complete, DTS migrates incremental data from the source database to the destination database. Incremental data migration allows data to be migrated smoothly without interrupting the services of self-managed applications during data migration.
SQL operations that can be incrementally migrated
| Operation type | SQL statement |
| --- | --- |
| DML | INSERT, UPDATE, and DELETE |
| DDL | ALTER TABLE and TRUNCATE TABLE |
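As an illustration of the table above, the following sketch issues statements of each listed type against a hypothetical source table; the connection details, database name, and table name are placeholders.

```python
# Sketch: statements run against a hypothetical source table during
# incremental migration. Comments indicate which statement types appear in
# the table above and are therefore captured by DTS.
import pymysql

conn = pymysql.connect(host="pc-xxxxxxxx.mysql.polardb.rds.aliyuncs.com",
                       port=3306, user="dts_user", password="<password>",
                       database="orders_db", autocommit=True)
with conn.cursor() as cur:
    cur.execute("INSERT INTO orders (id, amount) VALUES (1, 9.9)")   # DML: migrated
    cur.execute("UPDATE orders SET amount = 19.9 WHERE id = 1")      # DML: migrated
    cur.execute("DELETE FROM orders WHERE id = 1")                   # DML: migrated
    cur.execute("ALTER TABLE orders ADD COLUMN note VARCHAR(64)")    # DDL: migrated
    cur.execute("TRUNCATE TABLE orders")                             # DDL: migrated
    # DDL that is not listed above, such as CREATE TABLE or DROP TABLE, is
    # outside the incremental migration scope described in this topic.
conn.close()
```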
Permissions required for database accounts
| Database | Required permission | References |
| --- | --- | --- |
| PolarDB for MySQL cluster | Read permissions on the objects to be migrated | |
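The read permission in the table above can be granted in the PolarDB console or with SQL. A minimal sketch using pymysql follows; the cluster endpoint, account names, password, and database name are placeholders.

```python
# Sketch: create a migration account on the source PolarDB for MySQL cluster
# and grant it read permissions on the objects to be migrated. All names and
# passwords are placeholders; the PolarDB console achieves the same result.
import pymysql

conn = pymysql.connect(host="pc-xxxxxxxx.mysql.polardb.rds.aliyuncs.com",
                       port=3306, user="admin_account", password="<admin-password>",
                       autocommit=True)
with conn.cursor() as cur:
    cur.execute("CREATE USER 'dts_user'@'%' IDENTIFIED BY '<strong-password>'")
    cur.execute("GRANT SELECT ON orders_db.* TO 'dts_user'@'%'")  # read-only access to the migrated objects
conn.close()
```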
Procedure
Go to the Data Migration Tasks page.
Log on to the Data Management (DMS) console.
In the top navigation bar, move the pointer over DTS and choose Data Migration.
Note: The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode and Customize the layout and style of the DMS console.
You can also go to the Data Migration page of the new DTS console.
From the drop-down list on the right side of Data Migration Tasks, select the region in which your data migration instance resides.
Note: If you use the new DTS console, you must select the region in which the data migration instance resides in the upper-left corner.
Click Create Task. On the Create Data Migration Task page, configure the source and destination databases. The following table describes the parameters.
| Section | Parameter | Description |
| --- | --- | --- |
| N/A | Task Name | The name of the task. DTS automatically generates a task name. We recommend that you specify an informative name to identify the task. You do not need to specify a unique task name. |
| Source Database | Select an existing DMS database instance | The database instance that you want to use. You can choose whether to use an existing instance based on your business requirements. If you select an existing instance, DTS automatically populates the parameters for the database. If you do not select an existing instance, you must configure the following database information. |
| | Database Type | The type of the source database. Select PolarDB for MySQL. |
| | Access Method | The access method of the source database. Select Alibaba Cloud Instance. |
| | Instance Region | The region in which the source PolarDB for MySQL cluster resides. |
| | Replicate Data Across Alibaba Cloud Accounts | Specifies whether data is migrated across Alibaba Cloud accounts. In this example, No is selected. |
| | PolarDB Cluster ID | The ID of the source PolarDB for MySQL cluster. |
| | Database Account | The database account of the source PolarDB for MySQL cluster. For information about the permissions that are required for the account, see the Permissions required for database accounts section of this topic. |
| | Database Password | The password of the database account. |
| Destination Database | Select an existing DMS database instance | The database instance that you want to use. You can choose whether to use an existing instance based on your business requirements. If you select an existing instance, DTS automatically populates the parameters for the database. If you do not select an existing instance, you must configure the following database information. |
| | Database Type | The type of the destination database. Select DataHub. |
| | Access Method | The access method of the destination project. Select Alibaba Cloud Instance. |
| | Instance Region | The region in which the destination DataHub project resides. |
| | Project | The ID of the destination DataHub project. |
In the lower part of the page, click Test Connectivity and Proceed.
DTS handles network access in the following ways:
- If the source or destination database is an Alibaba Cloud database instance, such as an ApsaraDB RDS for MySQL or ApsaraDB for MongoDB instance, DTS automatically adds the CIDR blocks of DTS servers to the IP address whitelist of the instance.
- If the source or destination database is a self-managed database hosted on an Elastic Compute Service (ECS) instance, DTS automatically adds the CIDR blocks of DTS servers to the security group rules of the ECS instance, and you must make sure that the ECS instance can access the database. If the self-managed database is hosted on multiple ECS instances, you must manually add the CIDR blocks of DTS servers to the security group rules of each ECS instance.
- If the source or destination database is a self-managed database that is deployed in a data center or provided by a third-party cloud service provider, you must manually add the CIDR blocks of DTS servers to the IP address whitelist of the database to allow DTS to access the database.
For more information, see the CIDR blocks of DTS servers section of the Add the CIDR blocks of DTS servers topic.
Warning: If the public CIDR blocks of DTS servers are automatically or manually added to the whitelist of a database instance or to the security group rules of an ECS instance, security risks may arise. Therefore, before you use DTS to migrate data, you must understand and acknowledge the potential risks and take preventive measures, including but not limited to the following measures: enhancing the security of your username and password, limiting the ports that are exposed, authenticating API calls, regularly checking the whitelist or security group rules and forbidding unauthorized CIDR blocks, or connecting the database instance to DTS by using Express Connect, VPN Gateway, or Smart Access Gateway.
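If you need to manage the whitelist of the source PolarDB for MySQL cluster yourself instead of relying on the automatic step above, the whitelist can also be modified through the PolarDB ModifyDBClusterAccessWhitelist API operation. The following is a minimal sketch, assuming the aliyun-python-sdk-core and aliyun-python-sdk-polardb packages; the credentials, region, cluster ID, and CIDR block are placeholders, and the CIDR block shown is only an example, not an authoritative DTS range (look up the actual DTS CIDR blocks for your region first).

```python
# Sketch: append a CIDR block to the IP address whitelist of the source
# PolarDB for MySQL cluster through the PolarDB OpenAPI. All values below are
# placeholders; the CIDR block is illustrative, not an official DTS range.
from aliyunsdkcore.client import AcsClient
from aliyunsdkpolardb.request.v20170801.ModifyDBClusterAccessWhitelistRequest import (
    ModifyDBClusterAccessWhitelistRequest,
)

client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

request = ModifyDBClusterAccessWhitelistRequest()
request.set_DBClusterId("pc-xxxxxxxx")      # source cluster ID (placeholder)
request.set_SecurityIps("100.104.0.0/16")   # example CIDR block only
request.set_ModifyMode("Append")            # append instead of overwriting existing entries
print(client.do_action_with_exception(request))
```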
Configure the objects to be migrated and the advanced settings. The following parameters are available on this page.
Migration Types
If you migrate data from a PolarDB for MySQL cluster to a DataHub project, you can select only Schema Migration and Incremental Data Migration as the migration types based on your business requirements.
Note: If Incremental Data Migration is not selected, we recommend that you do not write data to the source database during data migration. This ensures data consistency between the source and destination databases.
Processing Mode of Conflicting Tables
- Precheck and Report Errors: checks whether the destination database contains tables that have the same names as tables in the source database. If the source and destination databases do not contain tables that have identical table names, the precheck is passed. Otherwise, an error is returned during the precheck and the data migration task cannot be started.
Note: You can use the object name mapping feature to rename the tables that are migrated to the destination database. You can use this feature if the source and destination databases contain identical table names and the tables in the destination database cannot be deleted or renamed. For more information, see Map object names.
- Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.
Warning: If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to potential risks.
If the source and destination databases have the same schemas, and a data record has the same primary key value as an existing data record in the destination database:
During full data migration, DTS does not migrate the data record to the destination database. The existing data record in the destination database is retained.
During incremental data migration, DTS migrates the data record to the destination database. The existing data record in the destination database is overwritten.
If the source and destination databases have different schemas, data may fail to be initialized. In this case, only some columns are migrated or the data migration task fails.
Naming Rules of Additional Columns
After DTS migrates data to DataHub, DTS adds additional columns to the destination topic. If the names of the additional columns are the same as the names of existing columns in the destination topic, the data migration fails. Select New Rule or Previous Rule based on your business requirements.
Warning: Before you specify this parameter, check whether the additional columns and existing columns in the destination topic have name conflicts. For more information, see Naming rules for additional columns. A sketch that lists the existing fields of the destination topic is provided after the Selected Objects description below.
Capitalization of Object Names in Destination Instance
The capitalization of database names, table names, and column names in the destination instance. By default, DTS default policy is selected. You can select other options to ensure that the capitalization of object names is consistent with that in the source or destination database. For more information, see Specify the capitalization of object names in the destination instance.
Source Objects
Select one or more objects from the Source Objects section. Click the icon and add the objects to the Selected Objects section.
Note: You can select tables or databases as the objects to be migrated. If you select tables as the objects to be migrated, DTS does not migrate other objects, such as views, triggers, or stored procedures, to the destination database.
Selected Objects
- To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see Map the name of a single object.
- To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.
Note: If you use the object name mapping feature to rename an object, other objects that are dependent on the object may fail to be migrated.
To specify WHERE conditions to filter data, right-click an object in the Selected Objects section. In the dialog box that appears, specify the conditions. For more information, see Specify filter conditions.
To select the SQL operations performed on a specific database or table, right-click an object in the Selected Objects section. In the dialog box that appears, select the SQL operations that you want to migrate. For more information about the SQL operations that can be migrated, see the SQL operations that can be incrementally migrated section of this topic.
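If the destination topic already exists before migration, you can list its fields to check for conflicts with the additional columns that DTS appends, as mentioned in the Naming Rules of Additional Columns parameter above. The following is a minimal sketch using the pydatahub SDK; the endpoint, credentials, project, and topic names are placeholders, and the exact additional column names depend on the naming rule you select.

```python
# Sketch: list the fields of an existing destination topic so that they can
# be compared against the additional columns that DTS appends. Project and
# topic names are placeholders.
from datahub import DataHub

dh = DataHub("<access-key-id>", "<access-key-secret>",
             "https://dh-cn-hangzhou.aliyuncs.com")

topic = dh.get_topic("dts_migration_demo", "orders")
existing = [field.name for field in topic.record_schema.field_list]
print(existing)  # compare against the additional column names before starting the task
```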
Click Next: Advanced Settings and configure the following parameters.
Monitoring and Alerting
Specifies whether to configure alerting for the data migration task. If the task fails or the migration latency exceeds the specified threshold, the alert contacts receive notifications. Valid values:
No: does not configure alerting.
Yes: configures alerting. In this case, you must also configure the alert threshold and alert notification settings. For more information, see the Configure monitoring and alerting when you create a DTS task section of the Configure monitoring and alerting topic.
Configure ETL
Specifies whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL? Valid values:
Yes: configures the ETL feature. You can enter data processing statements in the code editor. For more information, see Configure ETL in a data migration or data synchronization task.
No: does not configure the ETL feature.
In the lower part of the page, click Next: Save Task Settings and Precheck.
You can move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters to view the parameters to be specified when you call the relevant API operation to configure the DTS task.
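The parameters shown by Preview OpenAPI parameters correspond to the DTS ConfigureDtsJob API operation. The following sketch shows how such a call could be issued with the generic OpenAPI client from aliyun-python-sdk-core; the two query parameters filled in below are illustrative assumptions, and the full parameter set should be copied from the preview dialog.

```python
# Sketch: configure the migration task by calling ConfigureDtsJob with the
# generic OpenAPI client, using the values shown by Preview OpenAPI
# parameters. The two parameters below are examples only.
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

request = CommonRequest()
request.set_domain("dts.aliyuncs.com")
request.set_version("2020-01-01")
request.set_action_name("ConfigureDtsJob")
request.set_method("POST")
request.add_query_param("DtsJobName", "polardb-to-datahub-demo")  # example value
request.add_query_param("JobType", "MIGRATION")                   # assumed value for a migration task
# Add the remaining parameters exactly as shown by Preview OpenAPI parameters.
print(client.do_action_with_exception(request))
```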
Note: Before you can start the data migration task, DTS performs a precheck. You can start the data migration task only after the task passes the precheck.
If the task fails to pass the precheck, click View Details next to each failed item. After you analyze the causes based on the check results, troubleshoot the issues. Then, run a precheck again.
If an alert is triggered for an item during the precheck:
If an alert item cannot be ignored, click View Details next to the failed item and troubleshoot the issues. Then, run a precheck again.
If the alert item can be ignored, click Confirm Alert Details. In the View Details dialog box, click Ignore. In the message that appears, click OK. Then, click Precheck Again to run a precheck again. If you ignore the alert item, data inconsistency may occur, and your business may be exposed to potential risks.
Wait until Success Rate becomes 100%. Then, click Next: Purchase Instance.
On the Purchase Instance page, configure the Instance Class parameter for the data migration instance. The following table describes the parameters.
| Section | Parameter | Description |
| --- | --- | --- |
| New Instance Class | Resource Group | The resource group to which the data migration instance belongs. Default value: default resource group. For more information, see What is Resource Management? |
| | Instance Class | DTS provides instance classes that vary in the migration speed. You can select an instance class based on your business scenario. For more information, see Instance classes of data migration instances. |
Read and agree to Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.
Click Buy and Start. In the message that appears, click OK.
You can view the progress of the task on the Data Migration page.
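To confirm that migrated data is arriving in the destination, you can also read a few records from the DataHub topic. The following is a minimal sketch using the pydatahub SDK; the endpoint, credentials, project, topic, and shard ID are placeholders.

```python
# Sketch: read a few records from the destination DataHub topic to verify
# that migrated data is arriving. Names are placeholders; shard "0" is
# assumed to exist.
from datahub import DataHub
from datahub.models import CursorType

dh = DataHub("<access-key-id>", "<access-key-secret>",
             "https://dh-cn-hangzhou.aliyuncs.com")

project, topic_name, shard_id = "dts_migration_demo", "orders", "0"
topic = dh.get_topic(project, topic_name)
cursor = dh.get_cursor(project, topic_name, shard_id, CursorType.OLDEST).cursor
result = dh.get_tuple_records(project, topic_name, shard_id,
                              topic.record_schema, cursor, 10)
for record in result.records:
    print(record.values)  # includes the additional columns appended by DTS
```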