
Data Transmission Service: Migrate a Self-managed SQL Server Database to AnalyticDB for PostgreSQL

Last Updated: Feb 07, 2026

The Data Transmission Service (DTS) migrates data from a self-managed SQL Server database to AnalyticDB for PostgreSQL. This facilitates real-time data analytics.

Prerequisites

  • You can configure this migration task only in the new console.

  • For supported versions of self-managed SQL Server databases, see Migration solutions.

  • You have created a destination AnalyticDB for PostgreSQL instance. If not, see Create an instance.

  • The storage space of the destination AnalyticDB for PostgreSQL instance must be larger than the storage space used by the self-managed SQL Server database.

  • If the source instance meets one of the following conditions, we recommend that you split the migration task into multiple subtasks:

    • The source instance contains more than 10 databases.

    • A single database of the source instance backs up its logs more than once per hour.

    • A single database of the source instance executes more than 100 DDL statements per hour.

    • Logs are written at a rate of more than 20 MB/s for a single database of the source instance.

    • The change data capture (CDC) feature needs to be enabled for more than 1,000 tables.

Important notes

Note
  • During schema migration, DTS migrates foreign keys from the source database to the destination database.

  • During full data migration and incremental data migration, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. If cascade update or delete operations occur in the source database while the task is running, data inconsistency may occur.

Source database limits

  • Bandwidth requirements: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.

  • The tables to be migrated must have primary keys or UNIQUE constraints, and the fields must be unique. Otherwise, duplicate data may appear in the destination database.
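
    For example, before you configure the task, you can find tables that have neither a primary key nor a unique index with a query like the following (a minimal sketch):

    -- List user tables without a primary key or unique index.
    SELECT s.name AS schema_name, t.name AS table_name
    FROM sys.tables t
    JOIN sys.schemas s ON s.schema_id = t.schema_id
    WHERE NOT EXISTS (
        SELECT 1 FROM sys.indexes i
        WHERE i.object_id = t.object_id
          AND (i.is_primary_key = 1 OR i.is_unique = 1)
    );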

  • If you migrate table-level objects and need to edit them, such as by mapping table and column names, a single data migration task supports a maximum of 1,000 tables. If you exceed this limit, an error is reported after you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database.

  • A single data migration task supports a maximum of 10 databases. If you exceed this limit, stability and performance issues may occur. In this case, split the databases into multiple migration tasks.

  • If you configure a task to migrate specific objects instead of an entire database, you cannot migrate tables that have the same name but different schema names to the same destination database.

  • For incremental migration, data logs must meet the following requirements:

    • Logging must be enabled, the recovery model must be set to FULL, and a full physical backup must have been successfully performed.

    • For an incremental migration task, Data Transmission Service (DTS) requires that the data logs of the source database are retained for more than 24 hours. For a task that includes both full migration and incremental migration, DTS requires that the data logs of the source database are retained for at least 7 days. You can change the log retention period to more than 24 hours after the full migration is complete. Otherwise, the DTS task may fail because DTS cannot obtain the data logs. In extreme cases, data inconsistency or data loss may occur. Issues caused by a log retention period that is shorter than the required period are not covered by the DTS Service-Level Agreement (SLA).
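
      You can check the current recovery model with a query like the following (a minimal sketch; mytestdata is the example database name used in this topic):

      -- FULL is required for incremental migration.
      SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'mytestdata';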

  • To enable change data capture (CDC) for the tables to be migrated from the source database, the following conditions must be met. Otherwise, the precheck fails.

    • The value of the `srvname` field in the `sys.sysservers` view must be the same as the return value of the `SERVERPROPERTY` function.

    • If the source database is a self-managed SQL Server instance, the database owner must be `sa`. If the source database is an RDS for SQL Server instance, the database owner must be `sqlsa`.

    • If the source database is Enterprise Edition, it must be SQL Server 2008 or later.

    • If the source database is Standard Edition, it must be SQL Server 2016 SP1 or later.

    • If the source database is SQL Server 2017 (Standard or Enterprise Edition), we recommend that you upgrade the database version.
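
      For example, you can verify the first condition with queries like the following (a minimal sketch); the two returned values must match:

      -- Compare the local server name recorded in sys.sysservers with the actual server name.
      SELECT srvname FROM sys.sysservers WHERE srvid = 0;
      SELECT SERVERPROPERTY('ServerName');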

  • DTS uses the fn_log function to obtain source database logs. This function has performance bottlenecks. Do not clear the source database logs too early. Otherwise, the DTS task may fail.

  • Source database operation limits:

    • During initial schema synchronization and full data migration, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write new data to the source instance. Otherwise, data inconsistency occurs between the source and destination databases. To ensure real-time data consistency, select Initial Schema Synchronization, Full Data Migration, and Incremental Data Migration.

  • If the source database is a read-only instance, DDL operations cannot be migrated.

  • If the source database is an Azure SQL Database, a DTS instance can migrate only one database.

  • If the source database is an RDS for SQL Server instance and the migration task includes incremental migration, disable transparent data encryption (TDE) to ensure the stability of the DTS instance. For more information, see Disable TDE.

  • If you use the sp_rename command to rename objects such as stored procedures in the source database before the initial schema synchronization task runs, the task may not work as expected or may fail.

    Note

    Use the ALTER command to rename objects in the database.

  • In hybrid log parsing mode, you cannot consecutively perform multiple operations to add or remove columns in the source database within an interval of less than 10 minutes. For example, if you run the following SQL statements consecutively, the task reports an error.

    ALTER TABLE test_table DROP COLUMN Flag;
    ALTER TABLE test_table ADD Remark nvarchar(50) NOT NULL DEFAULT('');
  • If the source database is a Web Edition RDS for SQL Server instance, you must set SQL Server Incremental Synchronization Mode to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) when you configure the task.

  • During full data migration, make sure that the READ_COMMITTED_SNAPSHOT transaction processing mode parameter is enabled for the source database. This prevents shared locks from affecting data writes. Otherwise, exceptions such as data inconsistency and instance failures may occur. Exceptions caused by this issue are not covered by the DTS SLA.
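
    For example (a minimal sketch using the example database name mytestdata; WITH ROLLBACK IMMEDIATE rolls back open transactions, so run it during off-peak hours):

    -- Enable READ_COMMITTED_SNAPSHOT and verify the setting.
    ALTER DATABASE mytestdata SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
    SELECT name, is_read_committed_snapshot_on FROM sys.databases WHERE name = 'mytestdata';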

Other limits

  • Only data of basic data types can be migrated. Data of the CURSOR, ROWVERSION, SQL_VARIANT, HIERARCHYID, POLYGON, GEOMETRY, GEOGRAPHY, and user-defined data types created using the CREATE TYPE command cannot be migrated.
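
    To locate affected columns in advance, you can run a query along the following lines (a minimal sketch; extend the type list as needed, and note that ROWVERSION columns are reported as timestamp):

    -- Find columns whose data types cannot be migrated.
    SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE DATA_TYPE IN ('sql_variant', 'hierarchyid', 'geometry', 'geography', 'timestamp');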

  • Objects of the following types cannot be migrated: INDEX, VIEW, PROCEDURE, FUNCTION, TRIGGER, FK, FULL_TEXT_INDEX, DATATYPE, DEFAULT, SYNONYM, CATALOG, PLAN_GUIDE, DEFAULT_CONSTRAINT, UK, CK, and SEQUENCE.

  • You can select tables to migrate. You can also modify column mappings. If you use column mapping for a non-full table migration or if the source and destination table schemas are inconsistent, data in the columns that exist in the source table but not in the destination table is lost.

  • Append-optimized (AO) tables are not supported as destination tables.

  • If a table to migrate has a primary key, the primary key column in the destination table must be the same as in the source table. If a table to migrate does not have a primary key, the primary key column in the destination table must be the same as the distribution key.

  • The unique key, including the primary key column, of the destination table must contain all columns of the distribution key.
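
    For example, a destination table that satisfies these rules might be defined as follows (a minimal sketch with hypothetical table and column names):

    -- The distribution key (order_id) is contained in the primary key.
    CREATE TABLE orders (
        order_id   BIGINT NOT NULL,
        order_date DATE,
        amount     NUMERIC(12,2),
        PRIMARY KEY (order_id)
    ) DISTRIBUTED BY (order_id);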

  • If you select Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) for SQL Server Incremental Synchronization Mode in the Configure Objects step, the tables to be migrated must have a clustered index that contains primary key columns. The tables to be migrated cannot be heap tables, tables without primary keys, compressed tables, tables with computed columns, or tables with sparse columns. In hybrid log-based parsing mode, these restrictions do not apply.

  • If you set SQL Server Incremental Synchronization Mode to Log-based Parsing for Non-heap Tables and CDC-based Incremental Synchronization for Heap Tables (Hybrid Log-based Parsing) in the Configure Objects step, the following limits also apply:

    • Incremental migration by DTS depends on the CDC component. Make sure that the CDC job in the source database is running. Otherwise, the DTS task fails.

    • By default, the incremental data stored in the CDC component is retained for 3 days. We recommend that you run the `EXEC <database_name>.sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;` command to adjust the retention period, where <database_name> is the name of the source database.

      Note
      • <time> specifies the time in minutes.

      • If the number of incremental change SQL statements for a single table in the source database exceeds 10 million per day, we recommend that you set <time> to 1440.
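
      For example, to set the retention period to one day for the example database used in this topic (a minimal sketch):

      -- Set the CDC cleanup retention period to 1440 minutes (24 hours).
      EXEC mytestdata.sys.sp_cdc_change_job @job_type = 'cleanup', @retention = 1440;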

    • In a single migration task, we recommend that you enable CDC for no more than 1,000 tables. Otherwise, task latency or instability may occur.

    • The prerequisite module of an incremental migration task enables CDC for the source database. During this process, the source database may be briefly locked due to the limits of the SQL Server database kernel.

  • If you set SQL Server Incremental Synchronization Mode to Polling and querying CDC instances for incremental synchronization in the Configure Objects step, the following limits also apply:

    • The source database account used by the DTS instance must have the permissions to enable CDC. To enable database-level CDC, you need an account with the sysadmin role. To enable table-level CDC, you need a privileged account.

      Note
      • The privileged account (server administrator) provided by the Azure SQL Database console meets the requirements. For vCore-based databases, all instance types support CDC. For DTU-based databases, only instance types of S3 and later support CDC.

      • The privileged account of Amazon RDS for SQL Server meets the requirements. You can use it to enable database-level CDC by using stored procedures.

      • Clustered columnstore index tables do not support CDC.

      • The prerequisite module of an incremental migration task enables CDC for the source database. During this process, the source database may be briefly locked due to the limits of the SQL Server database kernel.

    • DTS polls the CDC instance of each table in the source database to obtain incremental data. Therefore, we recommend that you migrate no more than 1,000 tables from the source database. Otherwise, task latency or instability may occur.

    • By default, the incremental data stored in the CDC component is retained for 3 days. We recommend that you run the `EXEC <database_name>.sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;` command to adjust the retention period, where <database_name> is the name of the source database.

      Note
      • <time> specifies the time in minutes.

      • If the number of incremental change SQL statements for a single table in the source database exceeds 10 million per day, we recommend that you set <time> to 1440.

    • You cannot consecutively perform operations to add or remove columns. For example, you cannot perform more than two DDL operations to add or remove columns within one minute. Otherwise, the task may fail.

    • You cannot change the CDC instance of the source database. Otherwise, the task may fail or data may be lost.

  • To ensure the accuracy of incremental data migration latency, DTS creates the dts_cdc_sync_ddl trigger, the dts_sync_progress heartbeat table, and the dts_cdc_ddl_history DDL storage table in the source database in log parsing mode. In hybrid incremental synchronization mode, DTS creates the dts_cdc_sync_ddl trigger, the dts_sync_progress heartbeat table, and the dts_cdc_ddl_history DDL storage table, and enables database-level CDC and CDC for some tables. We recommend that the data change rate of tables with CDC enabled in the source database does not exceed 1,000 records per second (RPS).

  • Before you migrate data, evaluate the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. Otherwise, DTS consumes read and write resources on the source and destination databases during full data migration, which may increase the database load.

  • Full data migration involves concurrent INSERT operations, which cause table fragmentation in the destination database. Therefore, after full data migration is complete, the table storage space in the destination database is larger than that in the source instance.

  • Confirm whether the migration precision that DTS provides for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns using ROUND(COLUMN,PRECISION). If you do not specify the precision, DTS migrates FLOAT values with a precision of 38 and DOUBLE values with a precision of 308.

  • DTS attempts to resume a failed migration task within seven days. Therefore, before you switch your business to the destination instance, you must end or release the task, or use the revoke command to revoke the write permissions of the account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance after the task is automatically resumed.

  • If a migration task includes incremental data migration, you cannot reindex. Otherwise, the task may fail or data may be lost.

    Note

    You cannot change the primary keys of tables for which CDC is enabled.

  • If the number of tables for which CDC is enabled in a single migration task is greater than the value of the parameter The maximum number of tables for which CDC is enabled that DTS supports, the precheck fails.

  • If a task includes incremental migration and the data to be written to a single field of a table with CDC enabled exceeds 64 KB, you must run the exec sp_configure 'max text repl size', -1; command to adjust the configuration of the source database in advance.

    Note

    By default, a CDC job can process a single field with a maximum length of 64 KB.
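
    For example (a minimal sketch):

    -- Remove the 64 KB limit on replicated large-field values.
    EXEC sp_configure 'max text repl size', -1;
    RECONFIGURE;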

  • If multiple DTS instances use the same SQL Server database as the source, their incremental data ingestion modules are independent of each other.

  • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

    Note

    When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include, but are not limited to, those described in Modify instance parameters.

  • SQL Server is a commercial closed-source database. Due to known or unknown format-specific limits, issues may occur when DTS performs CDC and parsing on SQL Server logs. Therefore, before you enable incremental synchronization or migration for a SQL Server source in a production environment, we recommend that you perform a comprehensive proof of concept (POC) test. The test must cover all business change types, table schema changes, and business peak-hour stress tests. Due to the unpredictable nature of the SQL Server log format, you must ensure that the business logic in the production environment is consistent with that in the POC test. This is key to ensuring the high efficiency and stability of DTS.

Special cases

If the source instance is an RDS for SQL Server instance, DTS creates an rdsdt_dtsacct account in the source instance for data migration. Do not delete this account or change its password while the task is running. Otherwise, the task may fail. For more information, see System accounts.

Billing

  • Instance configuration fee:

    • Schema migration and full data migration: free of charge.

    • Incremental data migration: charged. For more information, see Billing overview.

  • Internet traffic fee: when the Access Method parameter of the destination database is set to Public IP Address, you are charged for Internet traffic. For more information, see Billing overview.

Migration types

  • Schema migration

    DTS migrates the schema definitions of the migration objects from the source database to the destination database.

    • Supported schema objects: Schema, Table, View, Function, Procedure.

    • Unsupported schema objects: Assemblies, Service Broker, Full-text Index, Full-text Catalog, Distributed Schema, Distributed Function, CLR Stored Procedure, CLR Scalar Function, CLR Table-valued Function, Internal Table, System, Aggregate Function.

    Warning

    This is a heterogeneous database migration. Data types are not mapped one-to-one. Carefully assess the impact of data type mapping on your business. For details, see Data type mapping between heterogeneous databases.

  • Full migration

    DTS migrates all historical data of the specified migration objects from the source database to the destination database.

  • Incremental migration

    After a full migration is complete, DTS migrates incremental data updates from the source database to the destination database. Incremental migration lets you smoothly migrate data without interrupting your self-managed applications.

SQL operations supported for incremental migration

  • DML: INSERT, UPDATE, DELETE

Note
  • If an UPDATE operation updates only the large fields, DTS does not migrate the operation.

  • When data is written to the destination AnalyticDB for PostgreSQL instance, the UPDATE statement is automatically converted to a REPLACE INTO statement. If the primary key is updated, the statement is converted to a DELETE and an INSERT statement.
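
  For example, a primary key update is conceptually applied as follows (a sketch with hypothetical table and column names):

    -- Source (SQL Server): the primary key value changes from 1 to 2.
    UPDATE t SET id = 2, val = 'x' WHERE id = 1;
    -- Applied to the destination as a delete followed by an insert:
    DELETE FROM t WHERE id = 1;
    INSERT INTO t (id, val) VALUES (2, 'x');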

  • DDL:

    • CREATE TABLE

    • ALTER TABLE (only ADD COLUMN and DROP COLUMN are supported)

    • DROP TABLE

    • CREATE INDEX, DROP INDEX

Note
  • DDL operations with custom data types are not supported.

  • Transactional DDL operations are not supported. For example, adding multiple columns in one statement or mixing DDL and DML in one statement may cause data loss.

  • Online DDL operations are not supported.

  • DDL operations using reserved keywords as property names are not supported.

  • DDL operations executed by system stored procedures are not supported.

  • TRUNCATE TABLE operations are not supported.

  • Partitions or table definitions containing functions are not supported.

Database account permissions

  • Self-managed SQL Server database:

    • Schema migration: SELECT permission.

    • Full migration: SELECT permission.

    • Incremental migration: sysadmin role.

  • AnalyticDB for PostgreSQL instance (all migration types):

    • LOGIN permission.

    • SELECT, CREATE, INSERT, UPDATE, and DELETE permissions on the destination tables.

    • CONNECT and CREATE permissions on the destination database.

    • CREATE permission on the destination schema.

    • COPY permission (for memory-based batch copy).

Note

You can also use the initial account of AnalyticDB for PostgreSQL.

To create a database account and grant it the required permissions, see the documentation for the corresponding database engine.
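
The following is a minimal sketch of the destination-side grants (hypothetical account and object names; adjust to your environment):

    -- Create a migration account on the destination AnalyticDB for PostgreSQL instance.
    CREATE USER dts_migrator PASSWORD '****';
    -- Database-level permissions.
    GRANT CONNECT, CREATE ON DATABASE mytestdata TO dts_migrator;
    -- Schema-level permissions.
    GRANT USAGE, CREATE ON SCHEMA public TO dts_migrator;
    -- Table-level permissions on existing destination tables.
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO dts_migrator;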

Preparations

Note

To perform incremental migration, configure transaction log settings and create a clustered index on the self-managed SQL Server database before you configure the data migration task.

Important

To migrate multiple databases, repeat steps 1 to 3 in this section. Otherwise, data inconsistency may occur.

  1. In the self-managed SQL Server database, run the following command to set the recovery model of the database to be migrated to full recovery mode:

    use master;
    GO
    ALTER DATABASE <database_name> SET RECOVERY FULL WITH ROLLBACK IMMEDIATE;
    GO

    Parameters:

    <database_name>: The name of the database to be migrated.

    Example:

    use master;
    GO
    ALTER DATABASE mytestdata SET RECOVERY FULL WITH ROLLBACK IMMEDIATE;
    GO
  2. Run the following command to perform a full physical backup of the database to be migrated. Skip this step if a full backup has already been performed.

    BACKUP DATABASE <database_name> TO DISK='<physical_backup_device_name>';
    GO

    Parameters:

    • <database_name>: The name of the database to be migrated.

    • <physical_backup_device_name>: The path and file name of the backup file.

    Example:

    BACKUP DATABASE mytestdata TO DISK='D:\backup\dbdata.bak';
    GO
  3. Run the following command to back up the transaction log of the database to be migrated.

    BACKUP LOG <database_name> TO DISK='<physical_backup_device_name>' WITH init;
    GO

    Parameters:

    • <database_name>: The name of the database to be migrated.

    • <physical_backup_device_name>: The path and file name of the backup file.

    Example:

    BACKUP LOG mytestdata TO DISK='D:\backup\dblog.bak' WITH init;
    GO
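
    Optionally, verify that the full and log backups were recorded (a minimal sketch that queries the msdb backup history):

    -- type = 'D' indicates a full database backup; type = 'L' indicates a log backup.
    SELECT TOP 5 database_name, type, backup_finish_date
    FROM msdb.dbo.backupset
    WHERE database_name = 'mytestdata'
    ORDER BY backup_finish_date DESC;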

Procedure

  1. Navigate to the migration task list page for the destination region using one of the following methods.

    From the DTS console

    1. Log on to the Data Transmission Service (DTS) console.

    2. In the navigation pane on the left, click Data Migration.

    3. In the upper-left corner of the page, select the region where the migration instance is located.

    From the DMS console

    Note

    The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.

    1. Log on to the Data Management (DMS) console.

    2. In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

    3. To the right of Data Migration Tasks, select the region where the migration instance is located.

  2. Click Create Task to navigate to the task configuration page.

  3. Configure the source and destination databases.

    Warning

    After you select the source and destination instances, we recommend that you carefully read the limits displayed at the top of the page. Otherwise, the task may fail or data inconsistency may occur.


    Task Name

    DTS automatically generates a task name. We recommend that you specify a descriptive name for easy identification. The name does not need to be unique.

    Source database information

    Database Type

    Select SQL Server.

    Connection Type

    Select Self-managed Database on ECS.

    Note

    If you select a self-managed database, you must complete the corresponding preparations. For details, see Preparations overview.

    Instance Region

    Select the region where your self-managed SQL Server database resides.

    ECS instance ID

    Enter the ECS instance ID of your self-managed SQL Server database.

    Port

    Enter the service port of your self-managed SQL Server database. Default is 1433.

    Database Account

    Enter the database account for your self-managed SQL Server database. For permission requirements, see Database account permissions.

    Database Password

    Enter the password for the database account.

    Encryption

    Specifies whether to encrypt the connection to the source database. Select Non-encrypted or SSL-encrypted based on your business requirements.

    • If SSL encryption is disabled for the source database, select Non-encrypted.

    • If SSL encryption is enabled for the source database, select SSL-encrypted. By default, DTS trusts the server certificate.

    Destination database information

    Database Type

    Select AnalyticDB PostgreSQL.

    Connection Type

    Select Cloud instance.

    Instance Region

    Select the region where your destination AnalyticDB PostgreSQL instance resides.

    Instance ID

    Select the instance ID of your destination AnalyticDB PostgreSQL instance.

    Database name

    Enter the name of the database in your destination AnalyticDB PostgreSQL instance that contains the objects to migrate.

    Database Account

    Enter the database account for your destination AnalyticDB PostgreSQL instance. For permission requirements, see Database account permissions.

    Database Password

    Enter the password for the database account.

  4. After you complete the configuration, click Test Connectivity and Proceed at the bottom of the page. In the CIDR Blocks of DTS Servers dialog box that appears, click Test Connectivity.

    Note

    Ensure that the IP address segments of the DTS service are automatically or manually added to the security settings of the source and destination databases to allow access from DTS servers. For more information, see Add DTS server IP addresses to a whitelist.

  5. Configure the task objects.

    1. On the Configure Objects page, configure the objects that you want to migrate.


      Migration Types

      • If you only need to perform a full migration, select both Schema Migration and Full Data Migration.

      • To perform a migration with no downtime, select Schema Migration, Full Data Migration, and Incremental Data Migration.

      Note
      • If you do not select Schema Migration, you must ensure that a database and tables to receive the data exist in the destination database. You can also use the object name mapping feature in the Selected Objects box as needed.

      • If you do not select Incremental Data Migration, do not write new data to the source instance during data migration to ensure data consistency.

      Processing Mode for Existing Destination Tables

      • Precheck and Report Errors: Checks whether tables with the same names exist in the destination database. If no tables with the same names exist, the precheck is passed. If tables with the same names exist, an error is reported during the precheck, and the data migration task does not start.

        Note

        If a table in the destination database has the same name but cannot be easily deleted or renamed, you can change the name of the table in the destination database. For more information, see Object name mapping.

      • Ignore Errors and Proceed: Skips the check for tables with the same names.

        Warning

        Selecting Ignore Errors and Proceed may cause data inconsistency and business risks. For example:

        • If the table schemas are consistent and a record in the destination database has the same primary key value as a record in the source database:

          • During full migration, DTS keeps the record in the destination database. The record from the source database is not migrated.

          • During incremental migration, DTS does not keep the record in the destination database. The record from the source database overwrites the record in the destination database.

        • If the table schemas are inconsistent, only some columns of data may be migrated, or the migration may fail. Proceed with caution.

      SQL Server Incremental Synchronization Mode

      • Log-based Parsing for Non-heap Tables and CDC-based Incremental Synchronization for Heap Tables (Hybrid Log-based Parsing):

        • Advantages:

          • Supports scenarios that involve source heap tables, tables without primary keys, compressed tables, or tables with computed columns.

          • Provides high link stability. This mode can obtain complete DDL statements and supports a wide range of DDL scenarios.

        • Disadvantages:

          • DTS creates the `dts_cdc_sync_ddl` trigger, the `dts_sync_progress` heartbeat table, and the `dts_cdc_ddl_history` DDL storage table in the source database. It also enables database-level CDC and CDC for some tables.

          • You cannot execute `SELECT INTO`, `TRUNCATE`, or `RENAME COLUMN` statements on tables with CDC enabled in the source database. You cannot manually delete triggers created by DTS in the source database.

      • Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported):

        • Advantage:

          This mode is non-intrusive to the source database.

        • Disadvantage:

          This mode does not support scenarios that involve source heap tables, tables without primary keys, compressed tables, or tables with computed columns.

      • Polling and querying CDC instances for incremental synchronization:

        • Advantages:

          • Supports full and incremental migration when the source database is Amazon RDS for SQL Server, Azure SQL Database, Azure SQL Managed Instance, Azure SQL Server on Virtual Machine, or Google Cloud SQL for SQL Server.

          • This mode uses the native CDC component of SQL Server to obtain incremental data, which improves the stability of incremental migration and reduces network bandwidth usage.

        • Disadvantages:

          • The source database account used by the DTS instance must have the permissions to enable CDC. Incremental data migration has a latency of about 10 seconds.

          • When you migrate multiple tables across multiple databases, you may encounter stability and performance issues.

      Note

      This setting appears only when Migration Types includes Incremental Data Migration.

      The maximum number of tables for which CDC is enabled that DTS supports.

      We recommend that you set the maximum number of tables for which CDC is enabled that a DTS task supports based on your business requirements. Default value: 1,000.

      Note

      This parameter is unavailable if you set the SQL Server Incremental Synchronization Mode parameter to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported).

      Select DDL and DML to Sync at the Instance Level

      Select SQL operations for incremental migration at the instance level. Supported operations are listed in SQL operations supported for incremental migration.

      Note

      To select SQL operations for incremental migration at the database or table level, right-click a migration object in the Selected Objects box and select the desired SQL operations.

      Storage Engine Type

      Select the storage engine type for the destination table as needed. The default value is Beam.

      Note

      This configuration item is available only when the kernel version of the destination AnalyticDB for PostgreSQL instance is v7.0.6.6 or later and you select Schema Migration for Migration Types.

      Source Objects

      In the Source Objects box, click the objects to migrate, and then click the right arrow icon to move them to the Selected Objects box.

      Note

      This scenario is a migration between heterogeneous databases. Therefore, the granularity for selecting migration objects is table. Other objects such as views, triggers, and stored procedures are not migrated to the destination database.

      Selected Objects

      • To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see Individual table column mapping.

      • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.

      Note
      • Using object name mapping may cause dependent objects to fail migration.

      • To filter data using a WHERE clause, right-click the table to migrate in Selected Objects and set the filter condition in the dialog box. For instructions, see Set filter conditions.

      • To select SQL operations at the database or table level, right-click the migration object in Selected Objects and select the required operations in the dialog box.

    2. Click Next: Advanced Settings to configure advanced parameters.


      Dedicated Cluster for Task Scheduling

      By default, DTS schedules tasks on a shared cluster. You do not need to select one. If you want more stable tasks, you can purchase a dedicated cluster to run DTS migration tasks.

      Retry Time for Failed Connections

      After the migration task starts, if the connection to the source or destination database fails, DTS reports an error and immediately begins to retry the connection. The default retry duration is 720 minutes. You can customize the retry time to a value from 10 to 1440 minutes. We recommend that you set the duration to more than 30 minutes. If DTS reconnects to the source and destination databases within the specified duration, the migration task automatically resumes. Otherwise, the task fails.

      Note
      • For multiple DTS instances that share the same source or destination, the network retry time is determined by the setting of the last created task.

      • Because you are charged for the task during the connection retry period, we recommend that you customize the retry time based on your business needs, or release the DTS instance as soon as possible after the source and destination database instances are released.

      Retry Time for Other Issues

      After the migration task starts, if a non-connectivity issue, such as a DDL or DML execution exception, occurs in the source or destination database, DTS reports an error and immediately begins to retry the operation. The default retry duration is 10 minutes. You can customize the retry time to a value from 1 to 1440 minutes. We recommend that you set the duration to more than 10 minutes. If the related operations succeed within the specified retry duration, the migration task automatically resumes. Otherwise, the task fails.

      Important

      The value of Retry Time for Other Issues must be less than the value of Retry Time for Failed Connections.

      Enable Throttling for Full Data Migration

      During full migration, DTS consumes read and write resources on the source and destination databases, which may increase the database load. If required, you can enable throttling for the full migration task. You can set Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) to reduce the load on the destination database.

      Note
      • This configuration item is available only if you select Full Data Migration for Migration Types.

      • You can also adjust the full migration speed after the migration instance is running.

      Enable Throttling for Incremental Data Migration

      If required, you can also choose to set speed limits for the incremental migration task. You can set RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s) to reduce the load on the destination database.

      Note
      • This configuration item is available only if you select Incremental Data Migration for Migration Types.

      • You can also adjust the incremental migration speed after the migration instance is running.

      Environment Tag

      Select an environment label to identify the instance. Not required for this example.

      Configure ETL

      Choose whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL? Valid values:

      • Yes: enables the ETL feature. You must also enter data processing statements.

      • No: disables the ETL feature.

      Monitoring and Alerting

      Select whether to set alerts and receive alert notifications based on your business needs.

      • No: Does not set an alert.

      • Yes: Sets alerts. Configure the alert threshold and the alert notification settings. If a migration fails or the latency exceeds the threshold, the system sends an alert notification.

    3. Click Next: Data Validation to configure a data validation task.

      For more information about the data validation feature, see Configure data validation.

    4. Optional: After completing the above configurations, click Next: Configure Database and Table Fields to set the Type, Primary Key Column, and Distribution Key for tables migrating to the destination AnalyticDB for PostgreSQL.

      Note
      • This step appears only when Migration Types includes Schema Migration. To modify the settings, set Definition Status to All.

      • The Primary Key Column can be a composite key made of multiple columns. Select one or more columns from Primary Key Column as the Distribution Key. For more information, see Manage data tables and Define table distribution.

  6. Save the task and run a precheck.

    • To view the parameters for configuring this instance when you call the API operation, move the pointer over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters in the bubble that appears.

    • If you do not need to view or have finished viewing the API parameters, click Next: Save Task Settings and Precheck at the bottom of the page.

    Note
    • Before the migration task starts, DTS performs a precheck. The task starts only after it passes the precheck.

    • If the precheck fails, click View Details next to the failed check item, fix the issue based on the prompt, and then run the precheck again.

    • If a warning is reported during the precheck:

      • For check items that cannot be ignored, click View Details next to the failed item, fix the issue based on the prompt, and then run the precheck again.

      • For check items that can be ignored, click Confirm Alert Details, and then click Ignore, OK, and Precheck Again in sequence to skip the alert item and run the precheck again. Ignoring a warning may cause issues such as data inconsistency and pose risks to your business.

  7. Purchase the instance.

    1. When the Success Rate is 100%, click Next: Purchase Instance.

    2. On the Purchase page, select the link specification for the data migration instance. For more information, see the following table.


      New Instance Class

      Resource Group Settings

      Select the resource group to which the instance belongs. The default value is default resource group. For more information, see What is Resource Management?

      Instance Class

      DTS provides migration specifications with different performance levels. The link specification affects the migration speed. You can select a specification based on your business scenario. For more information, see Data migration link specifications.

    3. After the configuration is complete, read and select Data Transmission Service (Pay-as-you-go) Service Terms.

    4. Click Buy and Start. In the OK dialog box that appears, click OK.

      You can view the progress of the migration task on the Data Migration Tasks list page.

      Note
      • If the migration task does not include incremental migration, it stops automatically after the full migration is complete. After the task stops, its Status changes to Completed.

      • If the migration task includes incremental migration, it does not stop automatically. The incremental migration task continues to run. While the incremental migration task is running, the Status of the task is Running.