You can use Data Transmission Service (DTS) to migrate your self-managed MySQL databases from on-premises data centers, Elastic Compute Service (ECS) instances, or other clouds to ApsaraDB RDS for MySQL with minimal to zero downtime.
Prerequisites
The source self-managed MySQL database must run MySQL 5.1, 5.5, 5.6, 5.7, or 8.0.
The destination RDS for MySQL instance must have more available storage space than the source self-managed database.
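To compare the required space, you can estimate the size of the source data from information_schema on the source database. The following query is a sketch; table statistics can be stale, so treat the result as an approximation:

```sql
-- Approximate data + index size per database, in GB.
SELECT table_schema AS database_name,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.tables
GROUP BY table_schema
ORDER BY size_gb DESC;
```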
Overview of migration solutions
DTS provides the following three migration types:
Schema migration: Migrates the schemas of databases, tables, views, triggers, stored procedures, and functions.
Full data migration: Migrates all existing data from the source database.
Incremental data migration: Migrates incremental data from the source database to the destination RDS instance, capturing changes that occur after the full data migration begins.
To achieve a smooth, zero-downtime migration for your application, we recommend combining schema migration, full data migration, and incremental data migration in your solution.
Common migration solutions
Migration solution | Downtime | Data consistency | Cost |
Schema + full + incremental data migration (Recommended) | Zero downtime | Consistent after the migration is complete, regardless of whether data is written to the source during migration | Paid |
Schema + full data migration | Depends on the time required for full data migration | Consistent only if no data is written to the source during migration | Free |
Billing
When you use DTS to migrate data from the source database to an RDS for MySQL instance, schema migration, full data migration, and public network traffic are free of charge. You are charged only for the following items:
Incremental data migration: You are charged while the incremental migration is running. You are not charged when the incremental data migration is paused or has failed.
Data verification: You are charged for full data verification and incremental data verification. Schema verification is free of charge.
Limitations
Primary keys: All tables to be migrated must have a primary key or a unique constraint with unique values. Otherwise, the destination RDS instance may contain duplicate data.
DDLs: Do not perform DDL operations, such as changing database or table schemas, during schema migration and full data migration. Otherwise, the migration task will fail.
DMLs: If you perform only a full data migration, do not use DMLs to write new data to the source database during the migration. Otherwise, the data in the source and destination will be inconsistent.
Number of tables: If you need to edit column name mappings, you can migrate up to 1,000 tables in a single migration task. To migrate more tables, create more migration tasks.
Character sets: If the data to be migrated contains four-byte characters, such as rare characters or emojis, the destination RDS instance and its tables must use the utf8mb4 character set. For schema migration, set the character_set_server parameter to utf8mb4 for the destination RDS instance.
Database accounts: To migrate accounts from the source database, you must check whether the account used for migration meets the requirements described in Migrate database accounts.
Primary/secondary switchover: If the source database performs a switchover during migration, the migration will fail.
Invisible columns in MySQL 8.0: If the source database runs MySQL 8.0.23 or later and contains invisible columns, including those automatically generated for tables without a primary key, you must first make them visible by running ALTER TABLE ... ALTER COLUMN ... SET VISIBLE;. Otherwise, data will be lost during the migration.
Content that cannot be migrated:
Parsers defined using comment syntax.
Data generated by operations that are not recorded in binary logs, such as physical backup and recovery or foreign key cascade operations.
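Two of the checks above can be scripted on the source database. The following is a sketch with placeholder names: the first query lists tables that have neither a primary key nor a unique constraint; the second, for MySQL 8.0.23 or later, lists invisible columns that should be made visible before migration.

```sql
-- 1) Tables without a primary key or unique constraint (replace your_db).
SELECT t.table_name
FROM information_schema.tables t
WHERE t.table_schema = 'your_db'
  AND t.table_type = 'BASE TABLE'
  AND NOT EXISTS (
    SELECT 1
    FROM information_schema.table_constraints c
    WHERE c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
  );

-- 2) MySQL 8.0.23 or later: invisible columns that must be made visible first.
SELECT table_schema, table_name, column_name
FROM information_schema.columns
WHERE extra LIKE '%INVISIBLE%'
  AND table_schema NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema');

-- Make an invisible column visible (names are placeholders):
-- ALTER TABLE your_table ALTER COLUMN your_column SET VISIBLE;
```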
Phase 1: Prepare for the migration
Step 1: Authorize DTS to access cloud resources
Navigate to the quick authorization page with your Alibaba Cloud account and click Authorize.
If you see the EntityAlreadyExists.Role and EntityAlreadyExists.Role.Policy messages, the authorization is already complete. Proceed with the next steps.
Step 2: Create and authorize database accounts
Account for the source database
Execute the following statements on the source database to create the account:
-- Create an account for the source database. Replace dts_user and Your_Password123 with actual values.
CREATE USER 'dts_user'@'%' IDENTIFIED BY 'Your_Password123';
-- Grant the account permissions for schema and full data migration.
GRANT SELECT ON *.* TO 'dts_user'@'%';
-- Grant the account permissions for incremental data migration.
GRANT REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW ON *.* TO 'dts_user'@'%';
-- Grant the CREATE privilege so that DTS can create heartbeat tables to advance the binary log file position.
GRANT CREATE ON *.* TO 'dts_user'@'%';
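-- Optional sanity check (not required by DTS): confirm the privileges
-- assigned to the migration account.
SHOW GRANTS FOR 'dts_user'@'%';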
FLUSH PRIVILEGES;
Privileged account for the destination RDS instance
Navigate to the Instances page of the RDS console, select the region at the top, and then click the ID of the destination RDS instance.
In the left navigation pane, click Accounts, and then click Create Account.
Select Privileged Account for Account Type and configure other parameters as prompted.
You can also use existing database accounts with the privileges listed in the following table for migration:
Database account | Schema migration | Full data migration | Incremental data migration |
Account for the source database | SELECT privilege | SELECT privilege | SELECT, REPLICATION CLIENT, REPLICATION SLAVE, and SHOW VIEW privileges |
Account for the destination RDS instance | Read and write permissions | Read and write permissions | Read and write permissions |
Step 3: Configure access to the source database
Select an access method for the source database and complete corresponding configurations.
Source database | Recommended access method | Configuration |
On-premises database with a public IP address | Public IP | Add the CIDR blocks of DTS servers to the IP whitelist of the source database. |
On-premises database without a public IP address | | |
Database hosted on an ECS instance | ECS instance | No manual configuration is required. |
Step 4: Configure binary logging for the source database (for incremental data migration only)
To perform incremental migration, you must configure binary logging for the source database as follows and then restart MySQL for the configuration to take effect:
Enable binary logging for the source database and retain logs for at least 7 days.
Set the binlog_format parameter of the source database to row and the binlog_row_image parameter to full.
If the source database is a dual-master cluster, enable the log_slave_updates parameter to ensure that DTS can obtain all binary logs.
To configure binary logging for the source database, follow these steps:
Linux
Use vim to modify the following parameters in the my.cnf file:

log_bin=mysql_bin
binlog_format=row
# For MySQL versions earlier than 8.0, configure expire_logs_days to set the retention period of binary logs. Default value: 0 (never expires).
# For MySQL 8.0 and later versions, configure binlog_expire_logs_seconds to set the retention period of binary logs. The default is 2592000 seconds (30 days).
# If the current binary log retention period is less than 7 days, reset it to 7 days or more by configuring one of the following parameters:
# expire_logs_days=7
# binlog_expire_logs_seconds=604800
# Set this parameter to an integer greater than 1.
server_id=2
# This parameter must be set if the source database runs a MySQL version later than 5.6.
binlog_row_image=full
# Set this parameter to ON if the source database is a dual-master cluster.
# log_slave_updates=ON

Restart the MySQL process:

/etc/init.d/mysqld restart
Windows
Modify the following parameters in the my.ini file:

log_bin=mysql_bin
binlog_format=row
# For MySQL versions earlier than 8.0, configure expire_logs_days to set the retention period of binary logs. Default value: 0 (never expires).
# For MySQL 8.0 and later versions, configure binlog_expire_logs_seconds to set the retention period of binary logs. The default is 2592000 seconds (30 days).
# If the current binary log retention period is less than 7 days, reset it to 7 days or more by configuring one of the following parameters:
# expire_logs_days=7
# binlog_expire_logs_seconds=604800
# Set this parameter to an integer greater than 1.
server_id=2
# This parameter must be set if the source database runs a MySQL version later than 5.6.
binlog_row_image=full
# Set this parameter to ON if the source database is a dual-master cluster.
# log_slave_updates=ON

Restart the MySQL service from the Windows Service Manager, or run the following commands:

net stop mysql
net start mysql
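After the restart on either platform, you can confirm that the settings took effect by querying the server. A quick sanity check:

```sql
-- Run on the source database after the restart.
SHOW VARIABLES LIKE 'log_bin';          -- expected: ON
SHOW VARIABLES LIKE 'binlog_format';    -- expected: ROW
SHOW VARIABLES LIKE 'binlog_row_image'; -- expected: FULL
SHOW VARIABLES LIKE 'server_id';        -- expected: an integer greater than 1
```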
Phase 2: Configure the migration task
Log on to the DTS console, select Data Migration in the left navigation pane, and then click Create Task.
Configure the source database and destination RDS instance.
Source database
Parameter
Description
Database Type
Select MySQL.
Access Method
Select an access method for the source database. Example: Public IP.
Instance Region
Select the region where the source database is located.
Domain Name or IP
Enter the public endpoint or IP address of the source database.
Port
Enter the service port of the source database. Default value: 3306.
Database Account
Enter the account created for the source database.
Database Password
Enter the password for the account.
Encryption
Specifies whether to encrypt the connection to the source database.
If SSL encryption is not enabled for the source database, select Non-encrypted.
If SSL encryption is enabled for the source database, select SSL-encrypted. In this case, you must upload a CA Certificate and configure the CA Key parameter.
Destination RDS instance
Parameter
Description
Database Type
Select MySQL.
Access Method
Select Alibaba Cloud Instance.
Instance Region
Select the region of the RDS instance.
Replicate Data Across Alibaba Cloud Accounts
Select No.
RDS Instance ID
Select the ID of the RDS instance.
Database Account
Enter the privileged account created for the RDS instance.
Database Password
Enter the password for the account.
Encryption
Specifies whether to encrypt the connection to the destination RDS instance. Select Non-encrypted or SSL-encrypted. If you select SSL-encrypted, you must enable SSL encryption for the destination RDS instance.
Test the database connections.
Click Test Connectivity and Proceed at the bottom of the page. In the displayed dialog box, click Test Connectivity. If the test fails, fix the issue based on the error message.
Configure the objects to migrate.
Configure the settings on the Configure Objects, Advanced Configurations, and Data Verification tabs.
Configure Objects
Configure the parameters in the following table. Then, click Next: Advanced Settings.
Parameter
Description
Migration Types
To perform only full data migration, select Schema Migration and Full Data Migration.
To ensure service continuity during data migration, select Schema Migration, Full Data Migration, and Incremental Data Migration.
Note: If you do not select Schema Migration, make sure a database and a table are created in the destination database to receive data and the object name mapping feature is enabled in Selected Objects.
If you do not select Incremental Data Migration, we recommend that you do not write data to the source database during data migration. This ensures data consistency between the source and destination databases.
Method to Migrate Triggers in Source Database
The method that is used to migrate triggers from the source database. You can select a migration method based on your business requirements. If no triggers are to be migrated, you do not need to configure this parameter. For more information, see Synchronize or migrate triggers from the source database.
Note: This parameter is available only if you select Schema Migration and Incremental Data Migration for the Migration Types parameter.
Enable Migration Assessment
Specifies whether to enable migration assessment. Migration assessment aims to check whether the schemas of the source and destination databases, such as the length of indexes, stored procedures, and dependent tables, meet the requirements. You can select Yes or No based on your business requirements.
Note: You can configure this parameter only if you select Schema Migration for the Migration Types parameter.
If you select Yes, the precheck may take more time. You can view Assessment Result during the precheck. The assessment results do not affect the precheck results.
Processing Mode of Conflicting Tables
Precheck and Report Errors: checks whether the destination database contains tables that use the same names as tables in the source database. If the source and destination databases do not contain tables that have identical table names, the precheck is passed. Otherwise, an error is returned during the precheck and the data migration task cannot be started.
Note: If the source and destination databases contain tables with identical names and the tables in the destination database cannot be deleted or renamed, you can use the object name mapping feature to rename the tables that are migrated to the destination database. For more information, see Map object names.
Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.
Warning: If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to the following potential risks:
If the source and destination databases have the same schema, and a data record has the same primary key as an existing data record in the destination database, the following scenarios may occur:
During full data migration, DTS does not migrate the data record to the destination database. The existing data record in the destination database is retained.
During incremental data migration, DTS migrates the data record to the destination database. The existing data record in the destination database is overwritten.
If the source and destination databases have different schemas, only specific columns are migrated or the data migration task fails. Proceed with caution.
Whether to migrate Event
Specifies whether to migrate events from the source database. If you select Yes, you must complete the subsequent operations. For more information, see Synchronize or migrate events.
Capitalization of Object Names in Destination Instance
The capitalization of database names, table names, and column names in the destination instance. By default, DTS default policy is selected. You can select other options to make sure that the capitalization of object names is consistent with that of the source or destination database. For more information, see Specify the capitalization of object names in the destination instance.
Source Objects
Select one or more objects from the Source Objects section, click the right arrow icon, and add the objects to the Selected Objects section.
Note: You can select columns, tables, or databases as the objects to be migrated. If you select tables or columns as the objects to be migrated, DTS does not migrate other objects, such as views, triggers, or stored procedures, to the destination database.
Selected Objects
To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see Map the name of a single object.
To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.
Note: If you use the object name mapping feature to rename an object, other objects that are dependent on the object may fail to be migrated.
To filter data by using a WHERE condition, right-click the table to be migrated in the Selected Objects section and configure the filter condition in the dialog box that appears. For more information, see Filter the data to be migrated.
To select the SQL operations to be migrated at the database or table level, right-click the object to be migrated in the Selected Objects section and select the SQL operations in the dialog box that appears. For more information about the supported operations, see SQL operations that support incremental data migration.
(Optional) Advanced Configurations
Configure the advanced settings to fine-tune the migration task. If you do not need to configure these settings, keep the default values and click Next: Data verification.
Parameter
Description
Dedicated Cluster for Task Scheduling
By default, DTS schedules the data migration task to the shared cluster if you do not specify a dedicated cluster. If you want to improve the stability of data migration tasks, purchase a dedicated cluster. For more information, see What is a DTS dedicated cluster.
Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database.
If you use DMS or the gh-ost tool to perform online DDL operations on the source database, you can specify whether to migrate the data of temporary tables generated by online DDL operations. Valid values:
Important: You cannot use tools such as pt-online-schema-change to perform online DDL operations on the source database. Otherwise, the DTS task fails.
Yes: DTS migrates the data of temporary tables generated by online DDL operations.
Note: If online DDL operations generate a large amount of data, latency may occur for the data migration task.
No, Adapt to DMS Online DDL: DTS does not migrate the data of temporary tables generated by online DDL operations. Only the original DDL operations that are performed by using DMS are migrated.
Note: If you select this option, the tables in the destination database may be locked.
No, Adapt to gh-ost: DTS does not migrate the data of temporary tables generated by online DDL operations. Only the original DDL operations that are performed by using the gh-ost tool are migrated. You can use the default or custom regular expressions to filter out the shadow tables of the gh-ost tool and tables that are not required.
Note: If you select this option, the tables in the destination database may be locked.
Whether to Migrate Accounts
Specifies whether to migrate the account information of the source database. You can configure this parameter based on your business requirements. If you select Yes, you must select the accounts that you want to migrate and check the permissions of the source and destination database accounts used in the data migration task.
Retry Time for Failed Connections
The retry time range for failed connections. If the source or destination database fails to be connected after the data migration task is started, DTS immediately retries a connection within the retry time range. Valid values: 10 to 1,440. Unit: minutes. Default value: 720. We recommend that you set the parameter to a value greater than 30. If DTS is reconnected to the source and destination databases within the specified retry time range, DTS resumes the data migration task. Otherwise, the data migration task fails.
Note: If you specify different retry time ranges for multiple data migration tasks that share the same source or destination database, the value that is specified later takes precedence.
When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time range based on your business requirements. You can also release the DTS instance at the earliest opportunity after the source database and destination instance are released.
Retry Time for Other Issues
The retry time range for other issues. For example, if DDL or DML operations fail to be performed after the data migration task is started, DTS immediately retries the operations within the retry time range. Valid values: 1 to 1440. Unit: minutes. Default value: 10. We recommend that you set the parameter to a value greater than 10. If the failed operations are successfully performed within the specified retry time range, DTS resumes the data migration task. Otherwise, the data migration task fails.
Important: The value of the Retry Time for Other Issues parameter must be smaller than the value of the Retry Time for Failed Connections parameter.
Enable Throttling for Full Data Migration
Specifies whether to enable throttling for full data migration. During full data migration, DTS uses the read and write resources of the source and destination databases. This may increase the loads of the database servers. You can enable throttling for full data migration based on your business requirements. To configure throttling, you must configure the Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) parameters. This reduces the loads of the destination database server.
Note: You can configure this parameter only if you select Full Data Migration for the Migration Types parameter.
Enable Throttling for Incremental Data Migration
Specifies whether to enable throttling for incremental data migration. To configure throttling, you must configure the RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s) parameters. This reduces the loads of the destination database server.
Note: You can configure this parameter only if you select Incremental Data Migration for the Migration Types parameter.
Environment Tag
The environment tag that is used to identify the DTS instance. You can select an environment tag based on your business requirements. In this example, you do not need to configure this parameter.
Whether to delete SQL operations on heartbeat tables of forward and reverse tasks
Specifies whether to write SQL operations on heartbeat tables to the source database while the DTS instance is running. Valid values:
Yes: does not write SQL operations on heartbeat tables. In this case, a latency of the DTS instance may be displayed.
No: writes SQL operations on heartbeat tables. In this case, features such as physical backup and cloning of the source database may be affected.
Configure ETL
Specifies whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL? Valid values:
Yes: configures the ETL feature. You can enter data processing statements in the code editor. For more information, see Configure ETL in a data migration or data synchronization task.
No: does not configure the ETL feature.
Monitoring and Alerting
Specifies whether to configure alerting for the data migration task. If the task fails or the migration latency exceeds the specified threshold, the alert contacts receive notifications. Valid values:
No: does not configure alerting.
Yes: configures alerting. In this case, you must also configure the alert threshold and alert notification settings. For more information, see the Configure monitoring and alerting when you create a DTS task section of the Configure monitoring and alerting topic.
(Optional) Data Verification
Data verification monitors and compares data between the source and destination databases to identify inconsistencies. If you do not require data verification, proceed to the next step.
Data verification method | Cost | Description |
Full Data Verification | Charged | Verifies the data in the full data migration. |
Incremental Data Verification | Charged | Verifies the data in the incremental data migration. |
Schema Verification | Free | Performs schema verification on the objects to be migrated. |
You can configure the data verification feature when configuring the migration task. You can also configure an individual data verification task after the migration is complete. You can skip this step as required.
Phase 3: Precheck and start migration
After you complete the configurations, click Next: Save Task Settings and Precheck. DTS validates your environment and configuration.
Wait for the precheck to complete.
If Success Rate reaches 100%, the environment is ready for migration.
If the precheck fails, click View Details, fix the issue, and run the precheck again.
Review any ignorable warnings to confirm they pose no risk, then ignore them and proceed.
After the precheck passes, click Next: Purchase Instance.
Select a Resource Group (default: default resource group) and an appropriate DTS instance specification.
Read and select Data Transmission Service (Pay-As-You-Go) Terms of Service, click Purchase and Start, and then click Confirm. The task starts automatically.
Phase 4: Data verification and business switchover
Wait for the migration task to progress.
Tasks without incremental data migration have a Status of Completed when finished.
Tasks with incremental data migration have a Status of Running and do not end automatically.
Verify data consistency.
After the full migration is complete and the incremental data migration latency is near zero, verify the data consistency between the source and destination databases.
Automatic verification: Configure a data verification task in DTS to automatically compare the data in the source and destination databases.
Manual verification: Compare the number of rows in tables and core business data between the source database and destination RDS instance. Sample code:
-- Example 1: Execute the following statement on the source database and destination RDS instance respectively to compare the number of table rows.
SELECT COUNT(*) FROM your_table;

-- Example 2: Execute the following statement on the source database and destination RDS instance respectively to compare key business indicators, such as the total order amount within a certain period.
SELECT SUM(amount) FROM orders WHERE create_time >= '2024-01-01';
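As a heavier option, MySQL's built-in CHECKSUM TABLE statement can compare table contents directly. It scans the entire table and is only meaningful when no writes are in progress and the table definitions match on both sides, so reserve it for small or static tables; the table names below are placeholders:

```sql
-- Run on both the source database and the destination RDS instance,
-- then compare the returned checksums.
CHECKSUM TABLE your_table, orders;
```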
Switch over your business.
During off-peak hours, stop your applications, ensure the incremental data migration latency is zero, and then change your applications' database connection strings to the endpoint of the destination RDS instance. After the switchover is complete, release the migration task.
Appendix 1: Migration types
Migration type | Description |
Schema migration | Migrates the structure definitions of objects such as tables, views, and stored procedures. |
Full data migration | Migrates all existing data from the source database to the destination RDS instance. |
Incremental data migration | After a full migration, this type continuously migrates new data changes from the source to the destination, enabling migration with zero downtime. |
Appendix 2: Supported SQL operations during incremental data migration
Operation type | SQL statement |
DML | INSERT, UPDATE, and DELETE |
DDL |
|
FAQ
Q1: What do I do if the "JDBC: [conn_error, cause: null, message from server: "Host 'XXX' is not allowed to connect to this MySQL server"]; PING: []; TELNET: []; requestId=[XXX]" error occurs?
This error indicates a Java Database Connectivity (JDBC) exception. Verify the account, password, and permissions for JDBC, or use a privileged account to test the connection.
Q2: Why cannot I select an RDS instance in the China (Fuzhou) region when I create a migration task?
DTS does not support instances in the China (Fuzhou) region. As an alternative, you can back up a self-managed MySQL 5.7 or 8.0 database to the cloud.