
ApsaraDB RDS: Migrate data from a self-managed MySQL database to an ApsaraDB RDS for MySQL instance

Last Updated: Sep 16, 2025

You can use Data Transmission Service (DTS) to migrate your self-managed MySQL databases from on-premises data centers, Elastic Compute Service (ECS) instances, or other clouds to ApsaraDB RDS for MySQL with minimal to zero downtime.

Prerequisites

  • The source self-managed MySQL database must run MySQL 5.1, 5.5, 5.6, 5.7, or 8.0.

  • The destination RDS for MySQL instance must have more available storage space than the source self-managed database.
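To check the storage prerequisite, you can estimate the logical size of the source database with a query similar to the following sketch (actual on-disk usage can differ from the size reported by information_schema):

```sql
-- Estimate the total size (data + indexes) of each user database on the source.
-- Compare the result with the available storage space of the destination RDS instance.
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.tables
WHERE table_schema NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema')
GROUP BY table_schema;
```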

Overview of migration solutions

DTS provides the following three migration types:

  • Schema migration: Migrates the schemas of databases, tables, views, triggers, stored procedures, and functions.

  • Full data migration: Migrates all existing data from the source database.

  • Incremental data migration: Migrates incremental data from the source database to the destination RDS instance, capturing changes that occur after the full data migration begins.

To achieve a smooth, zero-downtime migration for your application, we recommend combining schema migration, full data migration, and incremental data migration in your solution.

Common migration solutions

  • Schema + full + incremental data migration (Recommended)

    • Downtime: zero.

    • Data consistency: consistent after the migration is complete, regardless of whether data is written to the source during migration.

    • Cost: paid.

    • Use cases: businesses that require zero or minimal migration downtime; production environments.

  • Schema + full data migration

    • Downtime: depends on the time required for full data migration.

    • Data consistency: inconsistent if data is written to the source during migration; consistent if no data is written.

    • Cost: free.

    • Use cases: businesses that can tolerate migration downtime; test environments.

Billing

When you use DTS to migrate data from the source database to an RDS for MySQL instance, schema migration, full data migration, and public network traffic are free of charge. You are charged only for the following items:

  • Incremental data migration: You are charged while the incremental migration is running. You are not charged when the incremental data migration is paused or has failed.

  • Data verification: You are charged fees if you use the data verification feature.

Limitations

  • Primary keys: All tables to be migrated must have a primary key or a unique constraint with unique values. Otherwise, the destination RDS instance may contain duplicate data.

  • DDLs: Do not perform DDL operations, such as changing database or table schemas, during schema migration and full data migration. Otherwise, the migration task will fail.

  • DMLs: If you perform only a full data migration, do not use DMLs to write new data to the source database during the migration. Otherwise, the data in the source and destination will be inconsistent.

  • Number of tables: If you need to edit column name mappings, you can migrate up to 1,000 tables in a single migration task. To migrate more tables, create more migration tasks.

  • Character sets: If the data to be migrated contains four-byte characters, such as rare characters or emojis, the destination RDS instance and its tables must use the utf8mb4 character set. For schema migration, set the character_set_server parameter to utf8mb4 for the destination RDS instance.

  • Database accounts: To migrate accounts from the source database, you must check whether the account used for migration meets the requirements described in Migrate database accounts.

  • Primary/secondary switchover: If the source database performs a switchover during migration, the migration will fail.

  • Invisible columns in MySQL 8.0: If the source database runs MySQL 8.0.23 or later and contains invisible columns, including those automatically generated for tables without a primary key, you must first make them visible by running ALTER TABLE ... ALTER COLUMN ... SET VISIBLE;. Otherwise, data will be lost during the migration.

  • Content that cannot be migrated:

    • Parsers defined using comment syntax.

    • Data generated by operations that are not recorded in binary logs, such as physical backup and recovery or foreign key cascade operations.
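You can check the primary key limitation with a query similar to the following sketch, which lists tables that have neither a primary key nor a unique constraint:

```sql
-- List tables in the source database that lack both a primary key and a unique key.
-- Such tables may cause duplicate data in the destination RDS instance.
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON t.table_schema = c.table_schema
  AND t.table_name = c.table_name
  AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema')
  AND c.constraint_name IS NULL;
```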

More specific limitations


Limits on the source database

  • The server on which the source database is deployed must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.

  • The tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and the constrained columns must contain unique values. Otherwise, the destination database may contain duplicate data records.

  • If you select tables as the objects to be migrated and you want to modify the tables in the destination database, such as renaming tables or columns, you can migrate up to 1,000 tables in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to migrate the tables or configure a task to migrate the entire database.

  • If you want to migrate incremental data, the following requirements for binary logs must be met:

    • The binary logging feature is enabled. The binlog_format parameter is set to row and the binlog_row_image parameter is set to full. Otherwise, error messages are returned during the precheck and the data migration task fails to be started.

      Important

      If the source database is a self-managed MySQL database deployed in a dual-primary cluster, you must set the log_slave_updates parameter to ON. This ensures that DTS can obtain all binary logs.

    • The binary logs of an ApsaraDB RDS for MySQL instance must be stored for at least three days. We recommend that you store the binary logs for at least seven days. The binary logs of a self-managed MySQL database must be stored for at least seven days. Otherwise, DTS may fail to obtain the binary logs and the task may fail. In exceptional circumstances, data inconsistency or loss may occur. Make sure that you configure the retention period of binary logs based on the preceding requirements. Otherwise, the service reliability or performance stated in the Service Level Agreement (SLA) of DTS may not be guaranteed.

      Note

      For more information about how to set Retention Period of the binary logs of an ApsaraDB RDS for MySQL instance, see Configure parameters based on which the system automatically deletes the binary log files of an RDS instance.

  • Limits on operations to be performed on the source database:

    • During schema migration and full data migration, do not execute DDL statements to change the schemas of databases or tables. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency between the source and destination databases occurs. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.

  • Data generated by operations that are not recorded in binary logs, such as data restored from a physical backup or data changed by foreign key cascade operations, is not migrated to the destination database while the data migration instance is running.

    Note

    If such data is not migrated to the destination database, you can run another full data migration, provided that your business is not affected.

  • If the source database runs MySQL 8.0.23 or later and the data to be migrated includes invisible columns, DTS cannot obtain the data of these columns, and data loss occurs.

    Note
    • To make the columns visible, run the ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE; command. For more information, see Invisible Columns.

    • Tables that do not have primary keys automatically generate invisible primary keys. You need to make the invisible primary keys visible. For more information, see Generated Invisible Primary Keys.
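For a self-managed source database, you can check the binary log requirements described above with statements similar to the following sketch (on an ApsaraDB RDS for MySQL instance, configure the retention period in the console instead):

```sql
-- Check the binary log settings that incremental data migration requires.
SHOW VARIABLES WHERE Variable_name IN
  ('log_bin', 'binlog_format', 'binlog_row_image',
   'binlog_expire_logs_seconds', 'expire_logs_days', 'log_slave_updates');

-- On MySQL 8.0, extend the retention period to at least seven days (604800 seconds).
SET GLOBAL binlog_expire_logs_seconds = 604800;
```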

Other limits

  • To ensure compatibility, the versions of the source and destination MySQL databases must be the same.

  • Indexes and partitions cannot be migrated.

  • DTS does not migrate data where a parser defined by using comments is used.

  • If the destination database runs MySQL 8.0.23 or later and the columns that receive data include invisible columns, DTS cannot find the destination columns to write data to. In this case, the DTS instance fails to run and data loss occurs.

    Note
    • To make the columns visible, run the ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE; command. For more information, see Invisible Columns.

    • Tables that do not have primary keys automatically generate invisible primary keys. You need to make the invisible primary keys visible. For more information, see Generated Invisible Primary Keys.

  • If you do not use the schema migration feature of DTS, you must ensure that the field types of the source and destination tables are compatible. For example, if a field in a source table is of the TEXT type and the corresponding field in the destination table is of the VARCHAR(255) type, the data may be truncated when the source table contains large values.

  • If the data to be migrated contains four-byte characters, such as rare characters or emojis, the destination databases and tables that receive the data must use the utf8mb4 character set.

    Note

    If you use the schema migration feature of DTS, set the character_set_server parameter of the destination instance to utf8mb4.

  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses the read and write resources of the source and destination databases. This may increase the loads on the database servers.

  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination database. After full data migration is complete, the tablespace of the destination database is larger than that of the source database.

  • You must make sure that the precision settings for columns of the FLOAT or DOUBLE data type meet your business requirements. DTS uses the ROUND(COLUMN,PRECISION) function to retrieve values from columns of the FLOAT or DOUBLE data type. If you do not specify a precision, DTS sets the precision for columns of the FLOAT data type to 38 digits and the precision for columns of the DOUBLE data type to 308 digits.

  • DTS attempts to resume data migration tasks that failed within the last seven days. Before you switch workloads to the destination database, you must stop or release the failed task. You can also execute the REVOKE statement to revoke the write permissions from the accounts that are used by DTS to access the destination database. Otherwise, the data in the source database overwrites the data in the destination database after a failed task is resumed.

  • If DDL statements fail to be executed in the destination database, the DTS task continues to run. You can view the DDL statements that fail to be executed in task logs. For more information about how to view task logs, see View task logs.

  • If you write column names that differ only in capitalization to the same table in the destination MySQL database, the data migration result may not meet your expectations because the column names in MySQL databases are not case-sensitive.

  • After data migration is complete, that is, the Status of the instance changes to Completed, we recommend that you run the analyze table <table name> command to check whether data is written to the destination table. For example, if a high-availability (HA) switchover is triggered in the destination MySQL database, data may be written only to the memory. As a result, data loss occurs.

  • If the source database is an ApsaraDB RDS for MySQL instance for which the EncDB feature is enabled, full data migration cannot be performed.

    Note

    ApsaraDB RDS for MySQL instances that have the Transparent Data Encryption (TDE) feature enabled support schema migration, full data migration, and incremental data migration.

  • If you want to migrate accounts from the source database to the destination database, you need to learn the prerequisites and precautions. For more information, see Migrate database accounts.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the DTS task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.

Special cases

  • If the source database is a self-managed MySQL database, take note of the following limits:

    • If you perform a primary/secondary switchover on the source database when the data migration task is running, the task fails.

    • DTS calculates migration latency based on the timestamp of the latest migrated data in the destination database and the current timestamp in the source database. If no DML operation is performed on the source database for a long time, the migration latency may be inaccurate. If the latency of the data migration task is excessively high, you can perform a DML operation on the source database to update the latency.

      Note

      If you select an entire database as the object to be migrated, you can create a heartbeat table. The heartbeat table is updated or receives data every second.

    • DTS executes the CREATE DATABASE IF NOT EXISTS `test` statement in the source database as scheduled to move forward the binary log file position.

  • If the source database is an ApsaraDB RDS for MySQL instance, take note of the following limits:

    • In incremental data migration, an ApsaraDB RDS for MySQL instance that does not record transaction logs, such as a read-only ApsaraDB RDS for MySQL V5.6 instance, cannot be used as the source database.

    • DTS executes the CREATE DATABASE IF NOT EXISTS `test` statement in the source database as scheduled to move forward the binary log file position.

  • If the destination database is an ApsaraDB RDS for MySQL instance, take note of the following limit:

    DTS automatically creates a database in the destination ApsaraDB RDS for MySQL instance. However, if the name of the source database does not comply with the database naming conventions of ApsaraDB RDS for MySQL, you must manually create a database in the destination ApsaraDB RDS for MySQL instance before you configure the data migration task. For more information, see Manage databases.
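The heartbeat table mentioned in the note above can be sketched as follows. The dts_heartbeat table, the heartbeat_event event, and the use of the test database are illustrative names, not names that DTS requires; the event scheduler must be enabled, and the database that holds the table must be among the objects to be migrated:

```sql
-- Hypothetical heartbeat table that is updated every second so that DTS can
-- calculate migration latency accurately when the source is otherwise idle.
SET GLOBAL event_scheduler = ON;

CREATE TABLE IF NOT EXISTS `test`.`dts_heartbeat` (
  id INT PRIMARY KEY,
  ts TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
INSERT IGNORE INTO `test`.`dts_heartbeat` (id) VALUES (1);

-- Update the heartbeat row every second.
CREATE EVENT IF NOT EXISTS `test`.`heartbeat_event`
ON SCHEDULE EVERY 1 SECOND
DO UPDATE `test`.`dts_heartbeat` SET ts = CURRENT_TIMESTAMP WHERE id = 1;
```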

Phase 1: Prepare for the migration

Step 1: Authorize DTS to access cloud resources

  1. Navigate to the quick authorization page with your Alibaba Cloud account and click Authorize.

  2. If you see the EntityAlreadyExists.Role and EntityAlreadyExists.Role.Policy messages, the authorization is complete. Proceed with the next steps.


Step 2: Create and authorize database accounts

Account for the source database

Execute the following statements on the source database to create the account:

-- Create an account for the source database. Replace dts_user and Your_Password123 with actual values.
CREATE USER 'dts_user'@'%' IDENTIFIED BY 'Your_Password123';

-- Grant the account permissions for schema and full data migration.
GRANT SELECT ON *.* TO 'dts_user'@'%';

-- Grant the account permissions for incremental data migration.
GRANT REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW ON *.* TO 'dts_user'@'%';

-- Grant the account permissions to create heartbeat tables to move forward binary log file positions.
GRANT CREATE ON *.* TO 'dts_user'@'%'; 

FLUSH PRIVILEGES;
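You can then confirm that the account exists and that the grants took effect:

```sql
-- Confirm the account and its privileges. Replace dts_user with the actual account name.
SELECT user, host FROM mysql.user WHERE user = 'dts_user';
SHOW GRANTS FOR 'dts_user'@'%';
```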

Privileged account for the destination RDS instance

  1. Navigate to the Instances page of the RDS console, select the region at the top, and then click the ID of the destination RDS instance.

  2. In the left navigation pane, click Accounts, and then click Create Account.

  3. Select Privileged Account for Account Type and configure other parameters as prompted.

You can also use existing database accounts with the privileges listed in the following table for migration:

  • Account for the source database:

    • Schema migration and full data migration: the SELECT privilege on the objects to be migrated.

    • Incremental data migration:

      • The SELECT privilege on the objects to be migrated.

      • The REPLICATION CLIENT, REPLICATION SLAVE, and SHOW VIEW privileges.

      • Privileges to create databases and tables on the source database. These privileges allow DTS to create the test database for moving forward the binary log position.

  • Account for the destination RDS instance: read and write permissions.

Step 3: Configure access to the source database

Select an access method for the source database and complete corresponding configurations.

  • On-premises database with a public IP address: connect over Public IP. Add the CIDR blocks of DTS servers to the IP whitelist of the source database.

  • On-premises database without a public IP address: connect through Cloud Enterprise Network (CEN), Database Gateway, or VPN Gateway/Express Connect/Smart Access Gateway (SAG).

  • Database hosted on an ECS instance: connect through the ECS instance. No manual configurations are required.

Step 4: Configure binary logging for the source database (for incremental data migration only)

To perform incremental migration, you must configure binary logging for the source database as follows and then restart MySQL for the configuration to take effect:

  • Enable binary logging for the source database and retain logs for at least 7 days.

  • Set the binlog_format parameter of the source database to row and the binlog_row_image parameter to full.

  • If the source database is a dual-master cluster, enable the log_slave_updates parameter to ensure DTS can obtain all binary logs.

To configure binary logging for the source database, follow these steps:

Linux

  1. Use vim to modify the following parameters in the my.cnf file:

    log_bin=mysql_bin
    binlog_format=row
    
    # For MySQL versions earlier than 8.0, configure expire_logs_days to set the retention period of binary logs. Default value: 0 (never expires).
    # For MySQL 8.0 and later versions, configure binlog_expire_logs_seconds to set the retention period of binary logs. The default is 2592000 seconds (30 days).
    # If the current binary log retention period is less than 7 days, you can reset it to 7 days or more by configuring the following parameters:
    # expire_logs_days=7
    # binlog_expire_logs_seconds=604800
    
    # Set this parameter to an integer greater than 1.
    server_id=2
    
    # This parameter must be set if the source database runs a MySQL version later than 5.6.
    binlog_row_image=full
    
    # Set this parameter to ON if the source database is a dual-master cluster.
    # log_slave_updates=ON
  2. Restart the MySQL process:

    /etc/init.d/mysqld restart

Windows

  1. Modify the following parameters in the my.ini file:

    log_bin=mysql_bin
    binlog_format=row
    
    # For MySQL versions earlier than 8.0, configure expire_logs_days to set the retention period of binary logs. Default value: 0 (never expires).
    # For MySQL 8.0 and later versions, configure binlog_expire_logs_seconds to set the retention period of binary logs. The default is 2592000 seconds (30 days).
    # If the current binary log retention period is less than 7 days, you can reset it to 7 days or more by configuring the following parameters:
    # expire_logs_days=7
    # binlog_expire_logs_seconds=604800
    
    # Set this parameter to an integer greater than 1.
    server_id=2
    
    # This parameter must be set if the source database runs a MySQL version later than 5.6.
    binlog_row_image=full
    
    # Set this parameter to ON if the source database is a dual-master cluster.
    # log_slave_updates=ON
  2. Restart the MySQL service from the Windows Service Manager or run the following commands:

    net stop mysql
    net start mysql
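After MySQL restarts, you can confirm that binary logging is active and that a log file position is available:

```sql
-- Confirm that binary logging is enabled with the required format.
SHOW VARIABLES WHERE Variable_name IN
  ('log_bin', 'binlog_format', 'binlog_row_image', 'server_id');

-- Show the current binary log file and position.
-- On newer MySQL versions, SHOW BINARY LOG STATUS replaces SHOW MASTER STATUS.
SHOW MASTER STATUS;
```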

Phase 2: Configure the migration task

  1. Log on to the DTS console, select Data Migration in the left navigation pane, and then click Create Task.

  2. Configure the source database and destination RDS instance.

    Source database

    • Database Type: Select MySQL.

    • Access Method: Select an access method for the source database. Example: Public IP.

    • Instance Region: Select the region where the source database is located.

    • Domain Name or IP: Enter the public endpoint or IP address of the source database.

    • Port: Enter the service port of the source database. Default value: 3306.

    • Database Account: Enter the account created for the source database.

    • Database Password: Enter the password for the account.

    • Encryption: Specifies whether to encrypt the connection to the source database.

      • If SSL encryption is not enabled for the source database, select Non-encrypted.

      • If SSL encryption is enabled for the source database, select SSL-encrypted. In this case, you must upload a CA Certificate and configure the CA Key parameter.

    Destination RDS instance

    • Database Type: Select MySQL.

    • Access Method: Select Alibaba Cloud Instance.

    • Instance Region: Select the region of the RDS instance.

    • Replicate Data Across Alibaba Cloud Accounts: Select No.

    • RDS Instance ID: Select the ID of the RDS instance.

    • Database Account: Enter the privileged account created for the RDS instance.

    • Database Password: Enter the password for the account.

    • Encryption: Specifies whether to encrypt the connection to the destination RDS instance. Select Non-encrypted or SSL-encrypted. If you select SSL-encrypted, you must enable SSL encryption for the destination RDS instance.

  3. Test the database connections.

    Click Test Connectivity and Proceed at the bottom of the page. In the displayed dialog box, click Test Connectivity. If the test fails, fix the issue based on the error message.

  4. Configure the objects to migrate.

    Configure the settings on the Configure Objects, Advanced Configurations, and Data Verification tabs.

    Configure Objects

    Configure the following parameters. Then, click Next: Advanced Settings.


    Migration Types

    • To perform only full data migration, select Schema Migration and Full Data Migration.

    • To ensure service continuity during data migration, select Schema Migration, Full Data Migration, and Incremental Data Migration.

    Note
    • If you do not select Schema Migration, make sure a database and a table are created in the destination database to receive data and the object name mapping feature is enabled in Selected Objects.

    • If you do not select Incremental Data Migration, we recommend that you do not write data to the source database during data migration. This ensures data consistency between the source and destination databases.

    Method to Migrate Triggers in Source Database

    The method that is used to migrate triggers from the source database. You can select a migration method based on your business requirements. If no triggers are to be migrated, you do not need to configure this parameter. For more information, see Synchronize or migrate triggers from the source database.

    Note

    This parameter is available only if you select Schema Migration and Incremental Data Migration for the Migration Types parameter.

    Enable Migration Assessment

    Specifies whether to enable migration assessment. Migration assessment aims to check whether the schemas of the source and destination databases, such as the length of indexes, stored procedures, and dependent tables, meet the requirements. You can select Yes or No based on your business requirements.

    Note
    • You can configure this parameter only if you select Schema Migration for the Migration Types parameter.

    • If you select Yes, the precheck may take more time. You can view Assessment Result during the precheck. The assessment results do not affect the precheck results.

    Processing Mode of Conflicting Tables

    • Precheck and Report Errors: checks whether the destination database contains tables that use the same names as tables in the source database. If the source and destination databases do not contain tables that have identical table names, the precheck is passed. Otherwise, an error is returned during the precheck and the data migration task cannot be started.

      Note

      If the source and destination databases contain tables with identical names and the tables in the destination database cannot be deleted or renamed, you can use the object name mapping feature to rename the tables that are migrated to the destination database. For more information, see Map object names.

    • Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.

      Warning

      If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to the following potential risks:

      • If the source and destination databases have the same schema, and a data record has the same primary key as an existing data record in the destination database, the following scenarios may occur:

        • During full data migration, DTS does not migrate the data record to the destination database. The existing data record in the destination database is retained.

        • During incremental data migration, DTS migrates the data record to the destination database. The existing data record in the destination database is overwritten.

      • If the source and destination databases have different schemas, only specific columns are migrated or the data migration task fails. Proceed with caution.

    Whether to migrate Event

    Specifies whether to migrate events from the source database. If you select Yes, you must complete the subsequent operations. For more information, see Synchronize or migrate events.

    Capitalization of Object Names in Destination Instance

    The capitalization of database names, table names, and column names in the destination instance. By default, DTS default policy is selected. You can select other options to make sure that the capitalization of object names is consistent with that of the source or destination database. For more information, see Specify the capitalization of object names in the destination instance.

    Source Objects

    Select one or more objects from the Source Objects section. Click the Rightwards arrow icon and add the objects to the Selected Objects section.

    Note

    You can select columns, tables, or databases as the objects to be migrated. If you select tables or columns as the objects to be migrated, DTS does not migrate other objects, such as views, triggers, or stored procedures, to the destination database.

    Selected Objects

    • To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see Map the name of a single object.

    • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.

    Note
    • If you use the object name mapping feature to rename an object, other objects that are dependent on the object may fail to be migrated.

    • To filter data by using a WHERE condition, right-click the table to be migrated in the Selected Objects section and configure the filter condition in the dialog box that appears. For more information, see Filter the data to be migrated.

    • To select the SQL operations to be migrated at the database or table level, right-click the object to be migrated in the Selected Objects section and select the SQL operations in the dialog box that appears. For more information about the supported operations, see SQL operations that support incremental data migration.

    (Optional) Advanced Configurations

    Configure the advanced settings to fine-tune the migration task. If you do not need to configure these settings, keep the default values and click Next: Data verification.


    Dedicated Cluster for Task Scheduling

    By default, DTS schedules the data migration task to the shared cluster if you do not specify a dedicated cluster. If you want to improve the stability of data migration tasks, purchase a dedicated cluster. For more information, see What is a DTS dedicated cluster.

    Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database.

    If you use DMS or the gh-ost tool to perform online DDL operations on the source database, you can specify whether to migrate the data of temporary tables generated by online DDL operations. Valid values:

    Important

    You cannot use tools such as pt-online-schema-change to perform online DDL operations on the source database. Otherwise, the DTS task fails.

    • Yes: DTS migrates the data of temporary tables generated by online DDL operations.

      Note

      If online DDL operations generate a large amount of data, latency may occur for the data migration task.

    • No, Adapt to DMS Online DDL: DTS does not migrate the data of temporary tables generated by online DDL operations. Only the original DDL operations that are performed by using DMS are migrated.

      Note

      If you select this option, the tables in the destination database may be locked.

    • No, Adapt to gh-ost: DTS does not migrate the data of temporary tables generated by online DDL operations. Only the original DDL operations that are performed by using the gh-ost tool are migrated. You can use the default or custom regular expressions to filter out the shadow tables of the gh-ost tool and tables that are not required.

      Note

      If you select this option, the tables in the destination database may be locked.

    Whether to Migrate Accounts

    Specifies whether to migrate the account information of the source database. You can configure this parameter based on your business requirements. If you select Yes, you must select the accounts that you want to migrate and check the permissions of the source and destination database accounts used in the data migration task.

    Retry Time for Failed Connections

    The retry time range for failed connections. If the source or destination database fails to be connected after the data migration task is started, DTS immediately retries a connection within the retry time range. Valid values: 10 to 1,440. Unit: minutes. Default value: 720. We recommend that you set the parameter to a value greater than 30. If DTS is reconnected to the source and destination databases within the specified retry time range, DTS resumes the data migration task. Otherwise, the data migration task fails.

    Note
    • If you specify different retry time ranges for multiple data migration tasks that share the same source or destination database, the value that is specified later takes precedence.

    • When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time range based on your business requirements. You can also release the DTS instance at the earliest opportunity after the source database and destination instance are released.

    Retry Time for Other Issues

    The retry time range for other issues. For example, if DDL or DML operations fail to be performed after the data migration task is started, DTS immediately retries the operations within the retry time range. Valid values: 1 to 1,440. Unit: minutes. Default value: 10. We recommend that you set the parameter to a value greater than 10. If the failed operations are successfully performed within the specified retry time range, DTS resumes the data migration task. Otherwise, the data migration task fails.

    Important

    The value of the Retry Time for Other Issues parameter must be smaller than the value of the Retry Time for Failed Connections parameter.

    Enable Throttling for Full Data Migration

    Specifies whether to enable throttling for full data migration. During full data migration, DTS consumes read resources of the source database and write resources of the destination database, which may increase the load on the database servers. To configure throttling, set the Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) parameters. This reduces the load on the database servers.

    Note

    You can configure this parameter only if you select Full Data Migration for the Migration Types parameter.

    Enable Throttling for Incremental Data Migration

    Specifies whether to enable throttling for incremental data migration. To configure throttling, you must configure the RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s) parameters. This reduces the loads of the destination database server.

    Note

    You can configure this parameter only if you select Incremental Data Migration for the Migration Types parameter.

    Environment Tag

    The environment tag that is used to identify the DTS instance. You can select an environment tag based on your business requirements. In this example, you do not need to configure this parameter.

    Whether to delete SQL operations on heartbeat tables of forward and reverse tasks

    Specifies whether DTS writes SQL operations on heartbeat tables to the source database while the DTS instance is running. Valid values:

    • Yes: DTS does not write SQL operations on heartbeat tables to the source database. In this case, the console may display latency for the DTS instance.

    • No: DTS writes SQL operations on heartbeat tables to the source database. In this case, features of the source database, such as physical backup and cloning, may be affected.

    Configure ETL

    Specifies whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL?

    Monitoring and Alerting

    Specifies whether to configure alerting for the data migration task. If the task fails or the migration latency exceeds the specified threshold, the alert contacts receive notifications.

    (Optional) Data Verification

    Data verification monitors and compares data between the source and destination databases to identify inconsistencies. If you do not require data verification, proceed to the next step.

    • Full Data Verification (charged): verifies the data that is migrated during full data migration.

    • Incremental Data Verification (charged): verifies the data that is migrated during incremental data migration.

    • Schema Verification (free): verifies the schemas of the objects to be migrated.

    You can configure the data verification feature when you configure the migration task, or configure a separate data verification task after the migration is complete.

    1. Data Verification Mode: Select one or more data verification methods based on your business needs.

      Full Data Verification

      Configure the following parameters if you select Full Data Verification.

      Parameters

      Description

      Full Data Verification

      • Full field validation by row sampling: Configure a sampling percentage (an integer from 10 to 100) to perform full-field verification on the sampled data.

      • Verify based on the number of table rows: Verifies the row count of the full data migration without verifying the data.

      Note

      The Verify based on the number of table rows mode is free. The Full field validation by row sampling mode is charged based on the actual amount of verified data.

      Full Data Verification Time Rule

      Only Start Now is supported.

      Timeout Settings for Full Data Verification

      • No: The full verification task will not be forcibly ended if it times out. 

      • Yes: Set a timeout duration for the full verification task. The timer starts when the validation task begins, and if the task is not completed within the specified time, it is forcibly ended. The value is an integer from 1 to 72.

      Full calibration reference

      • Default: Uses the union of the source and destination databases as the baseline to verify data consistency. 

      • Source Database: Uses the source database as the baseline to verify that the destination database is consistent with the source (does not check for extra data in the destination RDS instance). 

      • Destination Database: Uses the destination RDS instance as the baseline to verify that the source database is consistent with the destination RDS instance (does not check for extra data in the source database).

      Maximum number of rows of data read per second by full verification (RPS) / Maximum number of bytes read per second by full verification (Byte/s)

      Full data verification consumes some read resources of the database. You can limit the verification rate in rows read per second and bytes read per second to reduce the load on the database.

      Note

      A value of 0 means no limit. If both parameters are set to 0, the verification speed is not limited.

      Incremental Data Verification

      If you select Incremental Data Verification, you must also configure the following parameter:

      Parameter

      Description

      Incremental Verification Benchmark

      You can filter the DML operations to be verified based on your needs.

    2. Specify Verification Objects.

      In the Selected Objects section, you can select the objects that do not require data verification and click Remove to remove them.

      Note

      By default, the objects to be synchronized or migrated are added to the Selected Objects section.

    3. Configure alerting for data verification.

      Configure alerting for data verification based on your business requirements. The following table describes the parameters.

      Parameter

      Description

      Full Data Verification Alert

      • No: disables alerting.

      • Yes: enables alerting. You must also select and configure alert rules. The following information describes the alert rules:

        • An alert is triggered if full data verification fails.

        • An alert is triggered if the volume of inconsistent data detected by full data verification is larger than or equal to the specified threshold.

      Incremental Data Verification Alert

      • No: disables alerting.

      • Yes: enables alerting. You must also select and configure alert rules. The following information describes the alert rules:

        • An alert is triggered if incremental data verification fails.

        • An alert is triggered if the volume of inconsistent data detected by incremental data verification within the specified consecutive periods is larger than or equal to the specified threshold. You can specify the number of consecutive periods, the statistical period, and the threshold of the inconsistent data volume.

        • An alert is triggered if the latency of data migration or synchronization detected by incremental data verification within the specified consecutive periods is greater than or equal to the specified threshold. You can specify the number of consecutive periods, the statistical period, and the latency of data migration or synchronization.

      Important

      If you enable alerting for data verification and want to receive a notification when an alert is triggered, you must subscribe to alert messages in CloudMonitor. For more information, see Configure alert rules for DTS tasks in the CloudMonitor console.

Phase 3: Precheck and start migration

  1. After you complete the configurations, click Next: Save Task Settings and Precheck. DTS validates your environment and configuration.

  2. Wait for the precheck to complete.

    • If Success Rate reaches 100%, the environment is ready for migration.

    • If the precheck fails, click View Details, fix the issue, and run the precheck again.

    • Review any ignorable warnings to confirm they pose no risk, then ignore them and proceed.

  3. After the precheck passes, click Next: Purchase Instance.

  4. Select a Resource Group (default: default resource group) and an appropriate DTS instance specification.

  5. Read and select Data Transmission Service (Pay-As-You-Go) Terms of Service, click Purchase and Start, and then click Confirm. The task starts automatically.

Phase 4: Data verification and business switchover

  1. Wait for the migration task to progress.

    • Tasks without incremental data migration have a Status of Completed when finished.

    • Tasks with incremental data migration have a Status of Running and do not end automatically.

  2. Verify data consistency.

    After the full migration is complete and the incremental data migration latency is near zero, verify the data consistency between the source and destination databases.

    1. Automatic verification: Configure a data verification task in DTS to automatically compare the data in the source and destination databases.

    2. Manual verification: Compare the number of rows in tables and core business data between the source database and destination RDS instance. Sample code:

      -- Example 1: Execute the following statement on the source database and destination RDS instance respectively to compare the number of table rows.
      SELECT COUNT(*) FROM your_table;
      
      -- Example 2: Execute the following statement on the source database and destination RDS instance respectively to compare key business indicators, such as the total order amount within a certain period.
      SELECT SUM(amount) FROM orders WHERE create_time >= '2024-01-01';
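      As a stronger manual check than row counts, MySQL's CHECKSUM TABLE statement can be run on both sides and the returned values compared. This is a sketch that assumes the table definitions and row formats are identical on both sides; the table name is a placeholder:

      ```sql
      -- Run on both the source database and the destination RDS instance,
      -- then compare the returned Checksum values. Matching checksums
      -- indicate identical table contents.
      CHECKSUM TABLE your_table;
      ```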
  3. Switch over your business.

    During off-peak hours, stop your applications, ensure the incremental data migration latency is zero, and then change your applications' database connection strings to the endpoint of the destination RDS instance. After the switchover is complete, release the migration task.

Appendix 1: Migration types

Migration type

Description

Schema migration

Migrates the structure definitions of objects like tables, views, and stored procedures. DTS converts the DEFINER in views and stored procedures to INVOKER in the destination database.

Full data migration

Migrates all existing data from the source database to the destination RDS instance.

Incremental data migration

After a full migration, this type continuously migrates new data changes from the source to the destination, enabling migration with zero downtime.

Appendix 2: Supported SQL operations during incremental data migration

Operation type

SQL statement

DML

INSERT, UPDATE, and DELETE

DDL

  • ALTER TABLE and ALTER VIEW

  • CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, and CREATE VIEW

  • DROP INDEX and DROP TABLE

  • RENAME TABLE

    Important

    RENAME TABLE operations may cause data inconsistency between the source and destination databases. For example, if you select a table as the object to be migrated and rename the table during data migration, the data of this table is not migrated to the destination database. To prevent this situation, you can select the database to which this table belongs as the object to be migrated when you configure the data migration task. Make sure that the databases to which the table belongs before and after the RENAME TABLE operation are added to the objects to be migrated.

  • TRUNCATE TABLE
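  The RENAME TABLE caveat above can be illustrated as follows (the database and table names are hypothetical). If only `db1.t1` is selected as the object to migrate, data written after the rename is not migrated; selecting the entire `db1` database avoids the problem:

  ```sql
  -- If db1.t1 was selected individually, DTS stops migrating its data
  -- after this rename. Select the whole db1 database instead so that
  -- both the old and new table names are covered.
  RENAME TABLE db1.t1 TO db1.t1_new;
  ```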

FAQ

  • Q1: What do I do if the "JDBC: [conn_error, cause: null, message from server: "Host 'XXX' is not allowed to connect to this MySQL server"]; PING: []; TELNET: []; requestId=[XXX]" error occurs?

    This error indicates a Java Database Connectivity (JDBC) exception: the source MySQL server refuses connections from the host on which DTS runs. Verify that the account, password, and permissions allow remote connections from the DTS server, or use a privileged account to test the connection.
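    One common cause is that the source account is not authorized to connect from the IP address of the DTS server. A minimal sketch of checking and fixing this on the source database follows; the account name, host pattern, and password are placeholders, the CREATE USER IF NOT EXISTS syntax requires MySQL 5.7 or later, and in production the host pattern should be restricted to the DTS CIDR blocks rather than '%':

    ```sql
    -- Check the hosts from which the account is allowed to connect.
    SELECT user, host FROM mysql.user WHERE user = 'dtsmigration';

    -- Authorize the account to connect from any host.
    CREATE USER IF NOT EXISTS 'dtsmigration'@'%' IDENTIFIED BY 'your_password';
    GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dtsmigration'@'%';
    FLUSH PRIVILEGES;
    ```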

  • Q2: Why cannot I select an RDS instance in the China (Fuzhou) region when I create a migration task?

    DTS does not support instances in the China (Fuzhou) region. As an alternative, you can back up a self-managed MySQL 5.7 or 8.0 database to the cloud.