
Data Transmission Service:Synchronize data between RDS for SQL Server instances

Last Updated: Feb 07, 2026

Use Data Transmission Service (DTS) to synchronize data between RDS for SQL Server instances.

Prerequisites

  • Create the source and destination RDS for SQL Server instances. For information about supported versions, see Synchronization solutions overview. For information about how to create an instance, see Quickly create and use an RDS for SQL Server instance.

    Important

    In hybrid log parsing mode, where SQL Server Incremental Synchronization Mode is set to Use log parsing for incremental synchronization of non-heap tables and CDC for incremental synchronization of heap tables, the following source database versions are supported:

    • Enterprise or Enterprise Evaluation Edition: Versions 2012, 2014, 2016, 2019, or 2022.

    • Standard Edition: Versions 2016, 2019, or 2022.

  • The storage space of the destination RDS for SQL Server instance must be larger than that of the source RDS for SQL Server instance.

  • If you synchronize data from a self-managed SQL Server database to an RDS for SQL Server instance and any of the following conditions apply, use the backup feature of RDS for SQL Server to synchronize the data. For more information, see Migrate data from a self-managed database to an ApsaraDB RDS instance.

    • The number of databases exceeds 10.

    • Log backups are performed on a single database more than once per hour.

    • DDL operations are performed on a single database more than 100 times per hour.

    • The log volume of a single database exceeds 20 MB/s.

    • Change Data Capture (CDC) needs to be enabled for more than 1,000 tables.

    • The source database logs contain heap tables, tables without primary keys, compressed tables, or tables with computed columns. You can run the following SQL statements to check whether these types of tables exist in the source database:

      1. Check for heap tables in the source database:

        SELECT s.name AS schema_name, t.name AS table_name FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id AND t.type = 'U' AND s.name NOT IN ('cdc', 'sys') AND t.name NOT IN ('systranschemas') AND t.object_id IN (SELECT object_id FROM sys.indexes WHERE index_id = 0);
      2. Check for tables without primary keys:

        SELECT s.name AS schema_name, t.name AS table_name FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id AND t.type = 'U' AND s.name NOT IN ('cdc', 'sys') AND t.name NOT IN ('systranschemas') AND t.object_id NOT IN (SELECT parent_object_id FROM sys.objects WHERE type = 'PK');
      3. Check for primary key columns that are not included in the clustered index columns of the source database:

        SELECT s.name AS schema_name, t.name AS table_name
        FROM sys.schemas s
        INNER JOIN sys.tables t ON s.schema_id = t.schema_id
        WHERE t.type = 'U'
          AND s.name NOT IN ('cdc', 'sys')
          AND t.name NOT IN ('systranschemas')
          AND t.object_id IN (
            SELECT pk_colums_counter.object_id AS object_id
            FROM (
              SELECT pk_colums.object_id, SUM(pk_colums.column_id) AS column_id_counter
              FROM (
                SELECT sic.object_id AS object_id, sic.column_id
                FROM sys.index_columns sic, sys.indexes sis
                WHERE sic.object_id = sis.object_id
                  AND sic.index_id = sis.index_id
                  AND sis.is_primary_key = 'true'
              ) pk_colums
              GROUP BY object_id
            ) pk_colums_counter
            INNER JOIN (
              SELECT cluster_colums.object_id, SUM(cluster_colums.column_id) AS column_id_counter
              FROM (
                SELECT sic.object_id AS object_id, sic.column_id
                FROM sys.index_columns sic, sys.indexes sis
                WHERE sic.object_id = sis.object_id
                  AND sic.index_id = sis.index_id
                  AND sis.index_id = 1
              ) cluster_colums
              GROUP BY object_id
            ) cluster_colums_counter
              ON pk_colums_counter.object_id = cluster_colums_counter.object_id
              AND pk_colums_counter.column_id_counter != cluster_colums_counter.column_id_counter
          );
      4. Check for compressed tables in the source database:

        SELECT s.name AS schema_name, t.name AS table_name FROM sys.objects t, sys.schemas s, sys.partitions p WHERE s.schema_id = t.schema_id AND t.type = 'U' AND s.name NOT IN ('cdc', 'sys') AND t.name NOT IN ('systranschemas') AND t.object_id = p.object_id AND p.data_compression != 0;
      5. Check for tables that contain computed columns:

        SELECT s.name AS schema_name, t.name AS table_name FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id AND t.type = 'U' AND s.name NOT IN ('cdc', 'sys') AND t.name NOT IN ('systranschemas') AND t.object_id IN (SELECT object_id FROM sys.columns WHERE is_computed = 1);
      6. Check for tables that contain sparse columns:

        SELECT s.name AS schema_name, t.name AS table_name FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id AND t.type = 'U' AND s.name NOT IN ('cdc', 'sys') AND t.name NOT IN ('systranschemas') AND t.object_id IN (SELECT object_id FROM sys.columns WHERE is_sparse = 1);

Precautions

Note

DTS does not synchronize foreign keys from the source database. Therefore, cascade operations on the source database are not synchronized to the destination database.


Source database limitations

  • Tables to be synchronized must have a PRIMARY KEY or UNIQUE constraint, and the values in the key columns must be unique. Otherwise, duplicate data may appear in the destination database.

  • If you synchronize data at the table level, the objects require editing (for example, column name mapping), and a single task contains more than 5,000 tables, split the tables across multiple tasks or configure the task to synchronize the entire database. Otherwise, an error may be reported after you submit the task.

  • A single sync task supports a maximum of 10 databases. Exceeding this limit may cause stability and performance issues. In this case, split the databases across multiple tasks.

  • Memory-optimized tables cannot be synchronized.

  • When you configure a task to synchronize specific objects to the same destination database, you cannot select objects that have the same table name but different schema names.

  • DTS uses the `fn_log` function to get logs from the source database. This function has performance bottlenecks. Do not clear the source database logs too early, or the task may fail.

  • Data logs:

    • Data logs must be enabled. The backup mode must be set to Full, and a full physical backup must have been successfully performed.

    • For incremental synchronization tasks, DTS requires the source database to retain data logs for more than 24 hours. For tasks that include both full and incremental synchronization, DTS requires the source database to retain data logs for at least 7 days. After the full synchronization is complete, you can change the log retention period to more than 24 hours. If the retention period is too short, the DTS task may fail because it cannot get the data logs. In extreme cases, this can cause data inconsistency or data loss. Issues caused by setting a log retention period shorter than required by DTS are not covered by the DTS Service-Level Agreement (SLA).
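    The following statements are a minimal sketch for verifying these requirements. Replace <your db name> with the actual database name; the backup check assumes that backup history is available in msdb.

      -- Check the recovery model (must be FULL) and the time of the last successful full backup.
      SELECT d.name, d.recovery_model_desc, MAX(b.backup_finish_date) AS last_full_backup
      FROM sys.databases d
      LEFT JOIN msdb.dbo.backupset b ON b.database_name = d.name AND b.type = 'D'
      WHERE d.name = '<your db name>'
      GROUP BY d.name, d.recovery_model_desc;

      -- Switch to the FULL recovery model if necessary.
      ALTER DATABASE [<your db name>] SET RECOVERY FULL;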

  • If Change Data Capture (CDC) needs to be enabled for tables in the source database, the following conditions must be met. Otherwise, the precheck fails.

    • The value of the `srvname` field in the `sys.sysservers` view must be the same as the return value of the `SERVERPROPERTY` function.

    • If the source database is a self-managed SQL Server instance, the database owner must be `sa`. If the source database is an RDS for SQL Server instance, the database owner must be `sqlsa`.

    • If the source database is Enterprise Edition, it must be SQL Server 2008 or later.

    • If the source database is Standard Edition, it must be SQL Server 2016 SP1 or later.

    • If the source database is SQL Server 2017 (Standard or Enterprise Edition), upgrade the version.
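    For the first condition, the following sketch compares the registered server name with the value returned by SERVERPROPERTY and re-registers it if the values differ. The old and new server names are placeholders, and the change takes effect only after the SQL Server service is restarted.

      -- Compare the registered local server name with the actual server name.
      SELECT srvname FROM sys.sysservers WHERE srvid = 0;
      SELECT SERVERPROPERTY('ServerName');

      -- If the two values differ, re-register the local server name.
      EXEC sp_dropserver '<old server name>';
      EXEC sp_addserver '<new server name>', 'local';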

  • If the source instance is a read-only instance, DDL operations cannot be synchronized.

  • If the source database is an Azure SQL Database, a single sync instance can synchronize only one database.

  • If the source database is an RDS for SQL Server instance, ensure that the Transparent Data Encryption (TDE) feature is disabled to ensure the stability of the sync instance. For more information, see Disable TDE.

  • If you use the sp_rename command to modify the names of objects, such as stored procedures, in the source database before a schema synchronization task runs, the task may produce unexpected results or fail.

    Note

    We recommend using the ALTER command to rename database objects.

  • In hybrid log parsing mode, you cannot consecutively run multiple operations to add or drop columns in the source database within a 10-minute interval. For example, running the following SQL statements consecutively causes the task to report an error.

    ALTER TABLE test_table DROP COLUMN Flag;
    ALTER TABLE test_table ADD Remark nvarchar(50) not null default('');
  • During schema synchronization and full data synchronization, do not perform Data Definition Language (DDL) operations that change the schema of databases or tables. Otherwise, the data synchronization task fails.

    Note

    During the full synchronization phase, DTS queries the source database, which acquires metadata locks. This may block DDL operations on the source database.

  • If the source database is an RDS for SQL Server instance that runs the Web edition, you must set SQL Server Incremental Synchronization Mode to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) when you configure the task.

  • We recommend that you keep the READ_COMMITTED_SNAPSHOT transaction processing mode parameter of the source database enabled while a full data sync task is running. This prevents shared locks from affecting data writes. Otherwise, issues such as data inconsistency or instance failures may occur. Such issues are not covered by the DTS Service-Level Agreement (SLA).
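    A minimal sketch for checking and enabling this setting, assuming <your db name> is replaced with the actual database name. The ALTER DATABASE statement requires exclusive access to the database, so run it during off-peak hours.

      -- Check whether READ_COMMITTED_SNAPSHOT is enabled.
      SELECT name, is_read_committed_snapshot_on FROM sys.databases WHERE name = '<your db name>';

      -- Enable it if needed. WITH ROLLBACK IMMEDIATE rolls back open transactions to acquire exclusive access.
      ALTER DATABASE [<your db name>] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;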

Other limitations

  • Data of the CURSOR, ROWVERSION, SQL_VARIANT, HIERARCHYID, POLYGON, GEOMETRY, and GEOGRAPHY data types cannot be synchronized.

  • Because values cannot be explicitly written to fields of the TIMESTAMP type in the destination database, DTS does not support full or incremental synchronization of such fields. This may cause data inconsistency or task failure.

  • If you synchronize data across different versions, confirm compatibility in advance.

  • To synchronize triggers from the source database, ensure that the database account used for the task has Owner permissions on the destination database.

  • If you set SQL Server Incremental Synchronization Mode to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) in the Configure Objects stage, the tables to be synchronized must have a clustered index that contains the primary key column. Synchronization of heap tables, tables without a primary key, compressed tables, tables with computed columns, or tables with sparse columns is not supported. These restrictions do not apply in the hybrid log parsing mode.

  • In the Configure Objects stage, if you set SQL Server Incremental Synchronization Mode to Log-based Parsing for Non-heap Tables and CDC-based Incremental Synchronization for Heap Tables (Hybrid Log-based Parsing), the following limitations also apply:

    • The incremental synchronization of DTS depends on the CDC component. Ensure that the CDC job in the source database is running correctly. Otherwise, the DTS task will fail.

    • By default, the incremental data stored by the CDC component is retained for 3 days. Adjust the retention period as needed by running the `EXEC sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;` command in the CDC-enabled database.

      Note
      • <time> specifies the time in minutes.

      • If the average number of daily incremental change SQL statements for a single table in the source database exceeds 10 million, set <time> to 1440.
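      A sketch to view the current retention (in minutes) of the CDC cleanup job:

        SELECT DB_NAME(database_id) AS database_name, job_type, retention
        FROM msdb.dbo.cdc_jobs
        WHERE job_type = 'cleanup';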

    • The prerequisite module for a DTS incremental synchronization task enables CDC at the database and table levels in the source database. During this process, the source database may be briefly locked due to limitations of the SQL Server database kernel.

    • In a single sync task, do not enable CDC for more than 1,000 tables. Otherwise, the task may experience latency or become unstable.

  • If you set SQL Server Incremental Synchronization Mode to Polling and querying CDC instances for incremental synchronization in the Configure Objects stage, the following limitations also apply:

    • The source database account used by the DTS instance must have the permission to enable CDC. Enabling database-level CDC requires an account with the sysadmin role permission, and enabling table-level CDC requires a privileged account.

      Note
      • The privileged account (server administrator) provided in the Azure SQL Database console meets the requirements. For databases that use the vCore-based purchasing model, all specifications support enabling CDC. For databases that use the DTU-based purchasing model, the specification must be S3 or higher to support enabling CDC.

      • The privileged account for Amazon RDS for SQL Server meets the requirements and supports enabling database-level CDC for stored procedures.

      • CDC cannot be enabled for tables with clustered columnstore indexes.

      • The prerequisite module for a DTS incremental synchronization task enables CDC at the database and table levels in the source database. During this process, the source database may be briefly locked due to limitations of the SQL Server database kernel.
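      For the clustered columnstore restriction above, a sketch to find affected tables (index type 5 indicates a clustered columnstore index):

        SELECT s.name AS schema_name, t.name AS table_name
        FROM sys.indexes i
        INNER JOIN sys.tables t ON i.object_id = t.object_id
        INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
        WHERE i.type = 5;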

    • DTS polls the CDC instance of each table in the source database to get incremental data. Therefore, do not synchronize more than 1,000 tables from the source database. Otherwise, the task may experience latency or become unstable.

    • By default, the incremental data stored by the CDC component is retained for 3 days. Adjust the retention period as needed by running the `EXEC sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;` command in the CDC-enabled database.

      Note
      • <time> specifies the time in minutes.

      • If the average number of daily incremental change SQL statements for a single table in the source database exceeds 10 million, set <time> to 1440.

    • Running add or drop column operations consecutively (more than two add or drop DDL operations within one minute) is not supported. Otherwise, the task may fail.

    • Do not modify the CDC instance in the source database. Otherwise, the task may fail or data may be lost.

  • To ensure accurate latency for incremental data synchronization, DTS performs the following actions: In the "parse source logs for incremental synchronization" mode, DTS creates the dts_cdc_sync_ddl trigger, the dts_sync_progress heartbeat table, and the dts_cdc_ddl_history DDL storage table in the source database. In hybrid incremental synchronization mode, DTS creates the dts_cdc_sync_ddl trigger, the dts_sync_progress heartbeat table, and the dts_cdc_ddl_history DDL storage table, and also enables database-level CDC and CDC for some tables. The data change volume for tables with CDC enabled in the source database should not exceed 1,000 records per second (RPS).

  • Evaluate the performance of the source and destination databases before you synchronize data. Synchronize data during off-peak hours. Otherwise, the initial full data synchronization consumes read and write resources on both databases, which may increase the database load.

  • Initial full synchronization runs concurrent INSERT operations, which causes table fragmentation in the destination database. As a result, the tablespace of the destination instance is larger than that of the source instance after the initial full synchronization is complete.

  • During DTS synchronization, do not write data to the destination database from any source other than DTS. This can cause data inconsistency between the source and destination databases. For example, if you use DMS to perform an online DDL operation while data is being written to the destination database from another source, data loss may occur in the destination database.

  • Reindexing is not supported for a sync instance. This operation can cause the task to fail or even lead to data loss.

    Note

    Changes related to the primary key are not supported for tables with CDC enabled.

  • If the number of tables with CDC enabled in a single sync task is greater than the value of The maximum number of tables for which CDC is enabled that DTS supports, the precheck fails.

  • If a single field in a table with CDC enabled needs to store more than 64 KB of data, you must run the exec sp_configure 'max text repl size', -1; command in advance to adjust the configuration of the source database.

    Note

    By default, a CDC job can process a maximum of 64 KB for a single field.
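    A sketch of the full sequence; the RECONFIGURE statement is required for the sp_configure change to take effect.

      EXEC sp_configure 'max text repl size', -1;
      RECONFIGURE;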

  • For incremental synchronization, disable any enabled triggers and foreign keys in the destination database. Otherwise, the sync task fails.

  • When you use the feature for modifying the objects to be synchronized, you cannot remove a database from the task.

  • If multiple sync instances use the same SQL Server database as the source, their incremental data ingestion modules are independent of each other.

  • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

    Note

    When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include but are not limited to those described in Modify instance parameters.

  • SQL Server is a commercial, closed-source database. Its log format has characteristics that can cause unavoidable issues when DTS performs incremental CDC and parsing. Before you use DTS for incremental or migration synchronization from a SQL Server source in a production environment, perform a comprehensive proof of concept (POC). Your POC should cover all business change types, table schema adjustments, and peak-hour stress tests. The SQL Server log format can be unpredictable. To ensure that DTS runs efficiently and stably, make sure your production business logic is consistent with what you tested in the POC.

Special cases

If the source instance is an RDS for SQL Server instance, DTS creates a rdsdt_dtsacct account in the source instance for data synchronization. Do not delete this account or change its password while the task is running. Otherwise, the task may fail. For more information, see System accounts.
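To confirm that the account still exists while the task is running, a minimal check:

    SELECT name, create_date FROM sys.server_principals WHERE name = 'rdsdt_dtsacct';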

Billing

  • Schema synchronization and full data synchronization: free of charge.

  • Incremental data synchronization: charged. For more information, see Billing overview.

Supported synchronization topologies

  • One-way one-to-one synchronization

  • One-way one-to-many synchronization

  • One-way cascade synchronization

  • One-way many-to-one synchronization

For an introduction and notes on each synchronization topology, see Data synchronization topologies.

Supported SQL operations

DML

INSERT, UPDATE, DELETE

Note

UPDATE statements that only update large objects are not supported.

DDL

  • CREATE TABLE

  • ALTER TABLE

    Only includes ADD COLUMN and DROP COLUMN

  • DROP TABLE

  • CREATE INDEX, DROP INDEX

Note
  • Transactional DDL operations are not supported. For example, a single SQL statement that adds multiple columns or a single SQL statement that includes both DDL and DML operations may cause data loss.

  • DDL operations that include custom types are not supported.

  • Online DDL operations are not supported.

  • DDL operations that use reserved keywords as attribute names are not supported.

  • DDL operations executed by system stored procedures are not supported.

  • TRUNCATE TABLE operations are not supported.

  • Partitions and table definitions that contain functions are not supported.
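For example, the following hypothetical statements on a table named test_table illustrate the supported pattern of one column change per ALTER TABLE statement and an unsupported variant:

  -- Supported DDL pattern: one column per ALTER TABLE statement.
  ALTER TABLE test_table ADD Remark nvarchar(50) NULL;

  -- Not supported: a single statement that adds multiple columns (may cause data loss).
  -- ALTER TABLE test_table ADD Col1 int NULL, Col2 int NULL;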

Procedure

  1. Go to the data synchronization task list page in the destination region. You can do this in one of two ways.

    DTS console

    1. Log on to the DTS console.

    2. In the navigation pane on the left, click Data Synchronization.

    3. In the upper-left corner of the page, select the region where the synchronization instance is located.

    DMS console

    Note

    The actual steps may vary depending on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.

    1. Log on to the DMS console.

    2. In the top menu bar, choose Data + AI > DTS (DTS) > Data Synchronization.

    3. To the right of Data Synchronization Tasks, select the region of the synchronization instance.

  2. Click Create Task to open the task configuration page.

  3. Configure the source and destination databases.

    Warning

    After you select the source and destination instances, review the Limits at the top of the page. Otherwise, the task may fail or data inconsistency may occur.

    Task Name

    DTS automatically generates a task name. We recommend that you specify a descriptive name for easy identification. The name does not need to be unique.

    Source Database

    Select Existing Connection

    • Select the database instance that is registered with DTS from the drop-down list. The database information below is automatically populated.

      Note

      In the DMS console, this configuration item is Select a DMS database instance.

    • If you have not registered the database instance or do not need to use a registered instance, manually configure the database information below.

    Database Type

    Select SQL Server.

    Connection Type

    Select Cloud Instance.

    Instance Region

    Select the region where the source RDS for SQL Server instance resides.

    Instance ID

    Select the ID of the source RDS for SQL Server instance.

    Database Account

    Enter the database account of the source RDS for SQL Server instance. The account must have owner permission on the objects to be synchronized. An account with administrative permission meets this requirement.

    Database Password

    Enter the password for the specified database account.

    Encryption

    Select Non-encrypted or SSL-encrypted as needed.

    • If SSL encryption is not enabled for the source database, select Non-encrypted.

    • If SSL encryption is enabled for the source database, select SSL-encrypted. DTS trusts the server-side certificate by default.

    Destination Database

    Select Existing Connection

    • Select the database instance that is registered with DTS from the drop-down list. The database information below is automatically populated.

      Note

      In the DMS console, this configuration item is Select a DMS database instance.

    • If you have not registered the database instance or do not need to use a registered instance, manually configure the database information below.

    Database Type

    Select SQL Server.

    Connection Type

    Select Cloud Instance.

    Instance Region

    Select the region where the destination RDS for SQL Server instance resides.

    Instance ID

    Select the ID of the destination RDS for SQL Server instance.

    Database Account

    Enter the database account of the destination RDS for SQL Server instance. The account must have owner permission on the objects to be synchronized.

    Database Password

    Enter the password for the specified database account.

    Encryption

    Select Non-encrypted or SSL-encrypted as needed.

    • If SSL encryption is not enabled for the destination database, select Non-encrypted.

    • If SSL encryption is enabled for the destination database, select SSL-encrypted. DTS trusts the server-side certificate by default.

  4. After completing the configuration, click Test Connectivity and Proceed at the bottom of the page.

    Note
    • Ensure that you add the CIDR blocks of the DTS servers (either automatically or manually) to the security settings of both the source and destination databases to allow access. For more information, see Add the IP address whitelist of DTS servers.

    • If the source or destination is a self-managed database (i.e., the Access Method is not Alibaba Cloud Instance), you must also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.

  5. Configure the task objects.

    1. On the Configure Objects page, specify the objects to synchronize.


      Synchronization Types

      Incremental Data Synchronization is always selected. By default, you also need to select Schema Synchronization and Full Data Synchronization. After the precheck, DTS initializes the destination instance with the full data of the selected source objects, which serves as the baseline for subsequent incremental synchronization.

      Method to Migrate Triggers in Source Database

      Select a method to synchronize triggers as needed. If the objects to be synchronized do not involve triggers, you do not need to configure this item. For more information, see Configure a method to synchronize or migrate triggers.

      Note

      This item is available only when Schema Synchronization is selected for Synchronization Types.

      SQL Server Incremental Synchronization Mode

      • Log-based Parsing for Non-heap Tables and CDC-based Incremental Synchronization for Heap Tables (Hybrid Log-based Parsing):

        • Advantages:

          • Supports scenarios with source database heap tables, tables without primary keys, compressed tables, and tables with computed columns.

          • High link stability. This mode can obtain complete DDL statements and supports a wide range of DDL scenarios.

        • Disadvantages:

          • DTS creates the trigger `dts_cdc_sync_ddl`, the heartbeat table `dts_sync_progress`, and the DDL storage table `dts_cdc_ddl_history` in the source database. It also enables database-level Change Data Capture (CDC) and partial table CDC.

          • You cannot execute SELECT INTO, TRUNCATE, and RENAME COLUMN statements on tables for which CDC is enabled in the source database. Triggers created by DTS in the source database cannot be manually deleted.

      • Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported):

        • Advantages:

          This mode is non-intrusive to the source database.

        • Disadvantages:

          Does not support scenarios with source database heap tables, tables without primary keys, compressed tables, or tables with computed columns.

      • Polling and querying CDC instances for incremental synchronization:

        • Advantages:

          • Supports full and incremental synchronization when the source database is Amazon RDS for SQL Server, Azure SQL Database, Azure SQL Managed Instance, Azure SQL Server on Virtual Machine, or Google Cloud SQL for SQL Server.

          • Uses the native SQL Server CDC component to obtain incremental data, which makes incremental synchronization more stable and uses less network bandwidth.

        • Disadvantages:

          • The source database account used by the DTS instance must have the permission to enable CDC. Incremental data synchronization has a latency of about 10 seconds.

          • In scenarios involving synchronization of multiple databases and tables, there may be risks of stability and performance issues.

      The maximum number of tables for which CDC is enabled that DTS supports.

      Set the maximum number of tables for which CDC can be enabled for the current synchronization instance as needed. The default value is 1000.

      Note

      This configuration item is unavailable when SQL Server Incremental Synchronization Mode is set to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported).

      Processing Mode of Conflicting Tables

      • Precheck and Report Errors: Checks for tables with the same names in the destination database. If any tables with the same names are found, an error is reported during the precheck and the data synchronization task does not start. Otherwise, the precheck is successful.

        Note

        If you cannot delete or rename the table with the same name in the destination database, you can map it to a different name in the destination. For more information, see Database Table Column Name Mapping.

      • Ignore Errors and Proceed: Skips the check for tables with the same name in the destination database.

        Warning

        Selecting Ignore Errors and Proceed may cause data inconsistency and put your business at risk. For example:

        • If the table schemas are consistent and a record in the destination database has the same primary key or unique key value as a record in the source database:

          • During full data synchronization, DTS retains the destination record and skips the source record.

          • During incremental synchronization, DTS overwrites the destination record with the source record.

        • If the table schemas are inconsistent, data initialization may fail. This can result in only partial data synchronization or a complete synchronization failure. Use with caution.

      Source Objects

      In the Source Objects box, click the objects to synchronize, and then click the rightwards arrow icon to move them to the Selected Objects box.

      Note

      You can select objects at the database, table, or column level. If you select only tables or columns, DTS does not synchronize other object types (such as views, triggers, and stored procedures).

      Selected Objects

      • To rename a single object in the destination instance, right-click the object in the Selected Objects box. For more information, see Map a single object name.

      • To rename multiple objects in bulk, click Batch Edit in the upper-right corner of the Selected Objects box. For more information, see Map multiple object names in bulk.

      Note
      • To select the SQL operations to be synchronized at the database or table level, right-click the object in the Selected Objects box and select the desired SQL operations in the dialog box that appears.

      • To filter data using a WHERE clause, right-click the table to be synchronized in the Selected Objects box and set the filter condition in the dialog box that appears. For information about how to set the condition, see Set filter conditions.

      • If you use the object name mapping feature, other objects that depend on the mapped object may fail to be synchronized.

    2. Click Next: Advanced Settings.


      Dedicated Cluster for Task Scheduling

      By default, DTS uses a shared cluster for tasks, so you do not need to make a selection. For greater task stability, you can purchase a dedicated cluster to run the DTS synchronization task. For more information, see What is a DTS dedicated cluster?.

      Retry Time for Failed Connections

      If the connection to the source or destination database fails after the synchronization task starts, DTS reports an error and immediately begins to retry the connection. The default retry duration is 720 minutes. You can customize the retry time to a value from 10 to 1,440 minutes. We recommend a duration of 30 minutes or more. If the connection is restored within this period, the task resumes automatically. Otherwise, the task fails.

      Note
      • If multiple DTS instances (e.g., Instance A and B) share a source or destination, DTS uses the shortest configured retry duration (e.g., 30 minutes for A, 60 for B, so 30 minutes is used) for all instances.

      • DTS charges for task runtime during connection retries. Set a custom duration based on your business needs, or release the DTS instance promptly after you release the source/destination instances.

      Retry Time for Other Issues

      If a non-connection issue (e.g., a DDL or DML execution error) occurs, DTS reports an error and immediately retries the operation. The default retry duration is 10 minutes. You can also customize the retry time to a value from 1 to 1,440 minutes. We recommend a duration of 10 minutes or more. If the related operations succeed within the set retry time, the synchronization task automatically resumes. Otherwise, the task fails.

      Important

      The value of Retry Time for Other Issues must be less than that of Retry Time for Failed Connections.

      Enable Throttling for Full Data Synchronization

      During full data synchronization, DTS consumes read and write resources from the source and destination databases, which can increase their load. To mitigate pressure on the destination database, you can limit the migration rate by setting Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s).


      Enable Throttling for Incremental Data Synchronization

      You can also limit the incremental synchronization rate to reduce pressure on the destination database by setting RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s).

      Environment Tag

      You can select an environment tag to identify the instance as needed. No selection is required for this example.

      Configure ETL

      Choose whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL?

      Monitoring and Alerting

      Choose whether to set up alerts. If the synchronization fails or the latency exceeds the specified threshold, DTS sends a notification to the alert contacts.

    3. Click Data Verification to configure a data verification task.

      To use the data verification feature, see Configure data verification.

  6. Save the task and perform a precheck.

    • To view the parameters for configuring this instance via an API operation, hover over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters in the tooltip.

    • If you have finished viewing the API parameters, click Next: Save Task Settings and Precheck at the bottom of the page.

    Note
    • Before a synchronization task starts, DTS performs a precheck. You can start the task only if the precheck passes.

    • If the precheck fails, click View Details next to the failed item, fix the issue as prompted, and then rerun the precheck.

    • If the precheck generates warnings:

      • For non-ignorable warnings, click View Details next to the item, fix the issue as prompted, and run the precheck again.

      • For ignorable warnings, you can bypass them by clicking Confirm Alert Details, then Ignore, and then OK. Finally, click Precheck Again to skip the warning and run the precheck again. Ignoring precheck warnings may lead to data inconsistencies and other business risks. Proceed with caution.

  7. Purchase the instance.

    1. When the Success Rate reaches 100%, click Next: Purchase Instance.

    2. On the Purchase page, select the billing method and link specifications for the data synchronization instance. For more information, see the following parameter descriptions.


      New Instance Class

      Billing Method

      • Subscription: You pay upfront for a specific duration. This is cost-effective for long-term, continuous tasks.

      • Pay-as-you-go: You are billed hourly for actual usage. This is ideal for short-term or test tasks, as you can release the instance at any time to save costs.

      Resource Group Settings

      The resource group to which the instance belongs. By default, the default resource group is used. For more information, see What is Resource Management?

      Instance Class

      DTS offers synchronization specifications at different performance levels that affect the synchronization rate. Select a specification based on your business requirements. For more information, see Data synchronization link specifications.

      Subscription Duration

      In subscription mode, select the duration and quantity of the instance. Monthly options range from 1 to 9 months. Yearly options include 1, 2, 3, or 5 years.

      Note

      This option appears only when the billing method is Subscription.

    3. Read and select the checkbox for Data Transmission Service (Pay-as-you-go) Service Terms.

    4. Click Buy and Start, and then click OK in the dialog box that appears.

      You can monitor the task progress on the data synchronization page.

Check the CDC status

You can use the following information to check the CDC status or disable CDC.

Note

To use the following SQL statements, replace the variables in them.

  • Check the CDC status:

    SELECT name, is_cdc_enabled FROM sys.databases WHERE name = '<your db name>';
  • Check the CDC job status:

    SELECT DB_NAME(database_id) AS database_name, job_type FROM [msdb].[dbo].[cdc_jobs] WHERE database_id = DB_ID('<your db name>');
  • Check if CDC is working correctly:

    • Check the disk space usage.

      SELECT * FROM sys.dm_db_log_space_usage;
    • Check if the Agent service is working correctly. For more information, see SQL Server Agent.

    • Check if CDC is scanning data correctly. If it is not, the DTS task may retry or stop.

      SELECT * FROM sys.dm_cdc_log_scan_sessions;
  • Disable CDC at the database level:

    USE [<your db name>];
    
    -- The DTS trigger must be dropped before CDC can be disabled. Ignore the error if the trigger does not exist.
    DROP TRIGGER [dts_cdc_sync_ddl] ON DATABASE;
    
    EXECUTE [sys].[sp_cdc_disable_db];
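  • Disable CDC at the table level. This is a minimal sketch; the schema name and table name are placeholders, and @capture_instance = 'all' removes all capture instances for the table:

    USE [<your db name>];

    EXECUTE [sys].[sp_cdc_disable_table]
        @source_schema = N'<your schema name>',
        @source_name = N'<your table name>',
        @capture_instance = N'all';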