AnalyticDB for MySQL is a real-time online analytical processing (OLAP) service that is developed by Alibaba Cloud for online data analysis with high concurrency. AnalyticDB for MySQL can analyze petabytes of data from multiple dimensions within milliseconds to provide data-driven insights into your business. This topic describes how to synchronize data from an ApsaraDB RDS for MySQL instance to an AnalyticDB for MySQL cluster by using Data Transmission Service (DTS). After you synchronize data, you can use AnalyticDB for MySQL to build internal business intelligence (BI) systems, interactive query systems, and real-time report systems.
Prerequisites
The tables to be synchronized from the source ApsaraDB RDS for MySQL instance have primary keys. A query sketch that checks for tables without primary keys follows this list.
An AnalyticDB for MySQL cluster is created. For more information, see Create a cluster.
The destination AnalyticDB for MySQL cluster has sufficient storage space.
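A quick way to verify the primary key prerequisite is to query information_schema on the source instance. The following is a minimal sketch; the system schemas excluded in the WHERE clause are the usual MySQL defaults, and you can further filter by the databases that you plan to synchronize.

```sql
-- List user tables on the source RDS MySQL instance that have no primary key.
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON  c.table_schema = t.table_schema
  AND c.table_name   = t.table_name
  AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
  AND c.constraint_name IS NULL;
```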
Precautions
During initial full data synchronization, DTS uses the read and write resources of the source and destination instances, which may increase their loads. If the instance specifications are low, the data volume is large, or the database already performs poorly, the database services may become unavailable. For example, DTS occupies a large amount of read and write resources when a large number of slow SQL queries run on the source RDS instance, when tables have no primary keys, or when a deadlock occurs in the destination database. Before you synchronize data, evaluate the impact of data synchronization on the performance of the source and destination instances. We recommend that you synchronize data during off-peak hours, for example, when the CPU utilization of the source and destination instances is less than 30%.
We recommend that you do not use gh-ost or pt-online-schema-change to perform DDL operations on the required objects during data synchronization. Otherwise, data may fail to be synchronized.
Due to the limits of AnalyticDB for MySQL, if the disk space usage of the nodes in an AnalyticDB for MySQL cluster reaches 80%, the cluster is locked. We recommend that you estimate the required disk space based on the objects to be synchronized and make sure that the destination cluster has sufficient storage space.
Prefix indexes cannot be synchronized. If the source database contains prefix indexes, data may fail to be synchronized.
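To check for prefix indexes before you configure the task, you can query information_schema.statistics on the source instance. A non-NULL SUB_PART value means that only the first N characters of the column are indexed. This is a verification sketch, not part of the DTS configuration.

```sql
-- Find prefix indexes (indexes that cover only a prefix of a column).
SELECT table_schema, table_name, index_name, column_name, sub_part
FROM information_schema.statistics
WHERE sub_part IS NOT NULL
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');
```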
Billing
| Synchronization type | Task configuration fee |
| --- | --- |
| Schema synchronization and full data synchronization | Free of charge. |
| Incremental data synchronization | Charged. For more information, see Billing overview. |
SQL operations that can be synchronized
DDL operations: CREATE TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE, ADD COLUMN, DROP COLUMN, and MODIFY COLUMN
DML operations: INSERT, UPDATE, and DELETE
If the data type of a field in the source table is changed during data synchronization, an error message is reported and the data synchronization task is interrupted. For more information about how to handle this error, see the "Troubleshoot the synchronization failure that occurs due to field type changes" section of this topic.
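For reference, the following statements are examples of operations in the preceding lists. The table and column names are hypothetical; statements of these types that are executed on the source instance are replicated to the destination cluster.

```sql
-- Examples of DDL operations in the supported list:
CREATE TABLE orders (order_id BIGINT PRIMARY KEY, amount DECIMAL(10, 2));
ALTER TABLE orders ADD COLUMN remark VARCHAR(255);

-- Examples of DML operations in the supported list:
INSERT INTO orders (order_id, amount) VALUES (1001, 99.90);
UPDATE orders SET amount = 109.90 WHERE order_id = 1001;
DELETE FROM orders WHERE order_id = 1001;
```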
Permissions required for database accounts
| Database | Required permission |
| --- | --- |
| ApsaraDB RDS for MySQL instance | The SELECT permission on the objects to be synchronized and the REPLICATION CLIENT, REPLICATION SLAVE, and SHOW VIEW permissions |
| AnalyticDB for MySQL cluster | Read and write permissions on the required objects |
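For illustration, the following GRANT statements show one way to prepare the accounts. The account names, password, and database name dtstest are placeholders, and the exact statements depend on how you manage accounts in your environment.

```sql
-- On the source ApsaraDB RDS for MySQL instance:
CREATE USER 'dts_sync'@'%' IDENTIFIED BY 'your_password';
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'dts_sync'@'%';
GRANT SELECT, SHOW VIEW ON dtstest.* TO 'dts_sync'@'%';

-- On the destination AnalyticDB for MySQL cluster, the account needs read and write
-- permissions on the destination database. A grant similar to the following typically
-- suffices; you can also manage cluster accounts and permissions in the console.
GRANT ALL ON dtstest.* TO 'adb_account';
```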
Data type mappings
The data types of ApsaraDB RDS for MySQL and AnalyticDB for MySQL do not have one-to-one correspondence. During initial schema synchronization, DTS converts the data types of the source database into those of the destination database. For more information, see Data type mappings for schema synchronization.
Procedure
Purchase a data synchronization instance. For more information, see Purchase a DTS instance.
Note: On the buy page, set Source Instance to MySQL, Destination Instance to AnalyticDB MySQL, and Synchronization Topology to One-way Synchronization.
Log on to the DTS console.
In the left-side navigation pane, click Data Synchronization.
In the upper part of the Data Synchronization Tasks page, select the region in which the data synchronization task is created.
Find the data synchronization task and click Configure Task in the Actions column.
Configure the source instance and the destination cluster.
| Section | Parameter | Description |
| --- | --- | --- |
| N/A | Synchronization Task Name | The task name that DTS automatically generates. We recommend that you specify a descriptive name that makes the task easy to identify. The task name does not need to be unique. |
| Source Instance Details | Instance Type | Select RDS Instance. |
| Source Instance Details | Instance Region | The source region that you selected on the buy page. You cannot change the value of this parameter. |
| Source Instance Details | Instance ID | The ID of the source ApsaraDB RDS instance. |
| Source Instance Details | Database Account | The database account of the source ApsaraDB RDS instance. For information about the permissions that are required for the account, see Permissions required for database accounts. Note: If the database engine of the source RDS instance is MySQL 5.5 or MySQL 5.6, you do not need to configure the Database Account or Database Password parameter. |
| Source Instance Details | Database Password | The password of the database account. |
| Source Instance Details | Encryption | Specifies whether to encrypt the connection to the source instance. Select Non-encrypted or SSL-encrypted. If you want to select SSL-encrypted, you must enable SSL encryption for the ApsaraDB RDS instance before you configure the data synchronization task. For more information, see Configure SSL encryption for an ApsaraDB RDS for MySQL instance. Important: The Encryption parameter is available only for regions in the Chinese mainland and the China (Hong Kong) region. |
| Destination Instance Details | Instance Type | This parameter is set to AnalyticDB and cannot be changed. |
| Destination Instance Details | Instance Region | The destination region that you selected on the buy page. You cannot change the value of this parameter. |
| Destination Instance Details | Version | Select 3.0. |
| Destination Instance Details | Database | The ID of the destination AnalyticDB for MySQL cluster. |
| Destination Instance Details | Database Account | The database account of the AnalyticDB for MySQL cluster. For information about the permissions that are required for the account, see Permissions required for database accounts. |
| Destination Instance Details | Database Password | The password of the database account. |
In the lower-right corner of the page, click Set Whitelist and Next.
If the source or destination database is an Alibaba Cloud database instance, such as an ApsaraDB RDS for MySQL or ApsaraDB for MongoDB instance, DTS automatically adds the CIDR blocks of DTS servers to the IP address whitelist of the instance.
If the source or destination database is a self-managed database hosted on an Elastic Compute Service (ECS) instance, DTS automatically adds the CIDR blocks of DTS servers to the security group rules of the ECS instance, and you must make sure that the ECS instance can access the database. If the self-managed database is hosted on multiple ECS instances, you must manually add the CIDR blocks of DTS servers to the security group rules of each ECS instance.
If the source or destination database is a self-managed database that is deployed in a data center or provided by a third-party cloud service provider, you must manually add the CIDR blocks of DTS servers to the IP address whitelist of the database to allow DTS to access the database. For more information, see Add the CIDR blocks of DTS servers.
Warning: If the CIDR blocks of DTS servers are automatically or manually added to the whitelist of the database or instance, or to the ECS security group rules, security risks may arise. Therefore, before you use DTS to synchronize data, you must understand and acknowledge the potential risks and take preventive measures, including but not limited to the following measures: enhancing the security of your username and password, limiting the ports that are exposed, authenticating API calls, regularly checking the whitelist or ECS security group rules and forbidding unauthorized CIDR blocks, or connecting the database to DTS by using Express Connect, VPN Gateway, or Smart Access Gateway.
Select the synchronization policy and the objects to be synchronized.
Parameter or setting
Description
Select the initial synchronization types
In most cases, you need to select both Initial Schema Synchronization and Initial Full Data Synchronization. After the precheck is complete, DTS synchronizes the schema and data of the required objects from the source instance to the destination cluster. The schema and data are the basis for subsequent incremental synchronization.
Processing Mode In Existed Target Table
Precheck and Report Errors: checks whether the source and destination databases contain tables that share the same names. If the destination database does not contain tables that have the same names as those in the source database, the precheck is passed. Otherwise, an error is returned during precheck and the data synchronization task cannot be started.
Note: If the source and destination databases contain identical table names and the tables in the destination database cannot be deleted or renamed, you can use the object name mapping feature to rename the tables that are synchronized to the destination database. For more information, see Rename an object to be synchronized.
Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.
Warning: If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to potential risks.
If the source and destination databases have the same schema, DTS does not synchronize data records that have the same primary keys as data records in the destination database.
If the source and destination databases have different schemas, the initial data synchronization may fail. In this case, only specific columns are synchronized, or the data synchronization task fails.
Merge Multi Tables
If you select Yes, DTS adds the __dts_data_source column to each table to store the data source. In this case, DDL operations cannot be synchronized. No is selected by default. In this case, DDL operations can be synchronized. A query sketch that uses the __dts_data_source column follows this parameter list.
Note: If you set this parameter to Yes, all of the selected source tables in the task are merged into one destination table. If you want to merge only some of the source tables, you can create two data synchronization tasks.
Select the operation types to be synchronized
Select the types of operations that you want to synchronize based on your business requirements. All operation types are selected by default. For more information, see SQL operations that can be synchronized.
Select the objects to be synchronized
Select one or more objects from the Available section and click the icon to add the objects to the Selected section.
You can select tables or databases as the objects to be synchronized.
Note: If you select a database as the object to be synchronized, all schema changes in the database are synchronized to the destination database. If you select a table as the object to be synchronized, only the ADD COLUMN operations that are performed on the table are synchronized to the destination database.
By default, after an object is synchronized to the destination database, the name of the object remains unchanged. You can use the object name mapping feature to rename the objects that are synchronized to the destination cluster. For more information, see Rename an object to be synchronized.
Rename Databases and Tables
You can use the object name mapping feature to rename the objects that are synchronized to the destination instance. For more information, see Object name mapping.
Replicate Temporary Tables When DMS Performs DDL Operations
If you use DMS to perform online DDL operations on the source database, you can specify whether to synchronize temporary tables generated by online DDL operations.
Yes: DTS synchronizes the data of temporary tables generated by online DDL operations.
Note: If online DDL operations generate a large amount of data, the data synchronization task may be delayed.
No: DTS does not synchronize the data of temporary tables generated by online DDL operations. Only the original DDL data of the source database is synchronized.
Note: If you select No, the tables in the destination database may be locked.
Retry Time for Failed Connections
By default, if DTS fails to connect to the source or destination database, DTS retries within the next 720 minutes (12 hours). You can specify the retry time based on your needs. If DTS reconnects to the source and destination databases within the specified time, DTS resumes the data synchronization task. Otherwise, the data synchronization task fails.
Note: When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time based on your business needs, and that you release the DTS instance at your earliest opportunity after the source and destination instances are released.
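To close the loop on the Merge Multi Tables setting described above: if you set it to Yes, every row in the merged destination table carries the __dts_data_source column, so you can check where each row came from. The table name merged_orders in the following sketch is hypothetical.

```sql
-- Count the rows contributed by each source in a merged destination table.
SELECT __dts_data_source, COUNT(*) AS row_count
FROM merged_orders
GROUP BY __dts_data_source;
```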
In the lower-right corner of the page, click Next.
Specify a type for the tables that you want to synchronize to the destination database.
Note: After you select Initial Schema Synchronization, you must specify the type, primary key column, and partition key column for the tables that you want to synchronize to the destination AnalyticDB for MySQL cluster. For more information, see CREATE TABLE.
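The exact syntax is described in the CREATE TABLE topic linked above. As a rough sketch of what such a destination table definition can look like, the following example assumes a hypothetical customer table; the distribution key, partition expression, and LIFECYCLE value are illustrative only and must be chosen based on your data.

```sql
-- Hypothetical AnalyticDB for MySQL 3.0 table with a primary key,
-- a hash distribution key, and a time-based partition key.
CREATE TABLE customer (
  customer_id BIGINT NOT NULL,
  gmt_create  DATETIME NOT NULL,
  name        VARCHAR(64),
  PRIMARY KEY (customer_id, gmt_create)
)
DISTRIBUTED BY HASH(customer_id)
PARTITION BY VALUE(DATE_FORMAT(gmt_create, '%Y%m%d')) LIFECYCLE 30
COMMENT 'Synchronized from the source RDS MySQL instance';
```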
In the lower-right corner of the page, click Precheck.
Note: Before you can start the data synchronization task, DTS performs a precheck. You can start the data synchronization task only after the task passes the precheck.
If the task fails to pass the precheck, you can click the icon next to each failed item to view details.
After you troubleshoot the issues based on the details, initiate a new precheck.
If you do not need to troubleshoot the issues, ignore the failed items and initiate a new precheck.
Close the Precheck dialog box after the following message is displayed: Precheck Passed. Then, the data synchronization task starts.
Wait until initial synchronization is completed and the data synchronization task enters the Synchronizing state.
You can view the status of the data synchronization task on the Synchronization Tasks page.
Troubleshoot the synchronization failure that occurs due to field type changes
If the data type of a field in the source table is changed during data synchronization, an error message is reported and the data synchronization task is interrupted. You can troubleshoot the issue by using the following method.
Create a table in the destination cluster based on the schema of the source table that fails to be synchronized. For example, if a table named customer (Table A) fails to be synchronized, create a table named customer_new (Table B) in the destination cluster and make sure that Table B has the same schema as Table A.
Execute an INSERT INTO ... SELECT statement to copy the data of Table A into Table B so that the data of the two tables is consistent.
Rename or delete Table A. Then, change the name of Table B to customer. An SQL sketch of these table operations follows the last step.
Restart the data synchronization task in the DTS console.
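The following statements sketch the preceding table operations for the customer example. The column list is a placeholder and must match the actual schema of Table A, and whether ALTER TABLE ... RENAME TO is available depends on your AnalyticDB for MySQL version, so verify the syntax against the cluster documentation first.

```sql
-- Re-create the failed table under a new name (placeholder columns).
CREATE TABLE customer_new (
  id   BIGINT NOT NULL,
  name VARCHAR(64),
  PRIMARY KEY (id)
) DISTRIBUTED BY HASH(id);

-- Copy the existing data so that the two tables stay consistent.
INSERT INTO customer_new SELECT * FROM customer;

-- Keep the old table under a different name, then reuse the original name.
ALTER TABLE customer RENAME TO customer_old;
ALTER TABLE customer_new RENAME TO customer;
```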