Data Transmission Service: Migrate ApsaraDB for MongoDB (replica set architecture) to ApsaraDB for MongoDB (replica set or sharded cluster architecture)

Last Updated: Feb 04, 2026

This topic describes how to use Data Transmission Service (DTS) to migrate data from an ApsaraDB for MongoDB instance with a replica set architecture to another ApsaraDB for MongoDB instance that uses either a replica set or sharded cluster architecture.

Supported source and destination databases

| Source database (replica set architecture) | Destination database (replica set or sharded cluster architecture) |
| --- | --- |
| ApsaraDB for MongoDB | ApsaraDB for MongoDB |
| Self-managed database hosted on ECS | Self-managed database hosted on ECS |
| Self-managed database connected over a leased line, VPN Gateway, or Smart Access Gateway | Self-managed database connected over a leased line, VPN Gateway, or Smart Access Gateway |
| Self-managed database with a public IP address | Self-managed database with a public IP address |

This topic uses ApsaraDB for MongoDB (replica set architecture) as the source and ApsaraDB for MongoDB (replica set or sharded cluster architecture) as the destination to explain the configuration process. The configuration is similar for other data sources.

Prerequisites

  • Create the source ApsaraDB for MongoDB instance (replica set architecture) and the destination ApsaraDB for MongoDB instance (replica set or sharded cluster architecture). For more information, see Create a replica set instance and Create a sharded cluster instance.

    Note

    For supported versions, see Overview of migration solutions.

  • Ensure that the storage capacity of the destination ApsaraDB for MongoDB instance is at least 10% larger than that of the source ApsaraDB for MongoDB instance.

  • If the destination ApsaraDB for MongoDB instance is a sharded cluster, you need to create the databases and collections to be sharded, configure data sharding, enable the Balancer, and perform pre-sharding in the destination instance as needed. For more information, see Configure data sharding to maximize shard performance and How to handle uneven data distribution in a MongoDB sharded cluster.

    Note

    Configuring data sharding prevents all migrated data from being stored on a single shard, which would limit cluster performance. Enabling the Balancer and performing pre-sharding help avoid data skew.
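Before you create the task, you can sanity-check the storage prerequisite from the list above. The helper below is a hypothetical sketch, not part of DTS; the 10% headroom figure comes from the prerequisite, and you would read the real usage and capacity numbers from the instance details in the console.

```python
def destination_capacity_ok(source_used_gb: float, dest_capacity_gb: float,
                            headroom: float = 0.10) -> bool:
    """Return True if the destination offers at least `headroom` (10% by
    default) more storage than the source currently uses, per the
    prerequisite above. Both figures come from the console; names here
    are illustrative."""
    return dest_capacity_gb >= source_used_gb * (1 + headroom)

# Example: a source using 500 GB needs at least 550 GB on the destination.
print(destination_capacity_ok(500, 550))  # True
print(destination_capacity_ok(500, 540))  # False
```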

Notes

Source database limits

  • Bandwidth requirements: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the migration speed is affected.

  • The collections to be migrated must have primary keys or UNIQUE constraints, and the fields must be unique. Otherwise, duplicate data may appear in the destination database.

  • If you migrate data at the collection level and need to edit the collections, such as mapping collection names, a single data migration task can migrate a maximum of 1,000 collections. If you exceed this limit, an error is reported when you submit the task. In this case, split the collections into multiple migration tasks or configure a task to migrate the entire database.

  • A single piece of data to be migrated from the source database cannot exceed 16 MB. Otherwise, the task fails.

  • If the source database is Azure Cosmos DB for MongoDB or an Amazon DocumentDB elastic cluster, only full data migration is supported.

  • To perform incremental migration:

    The source database must have the oplog enabled, and the oplog must be retained for at least seven days. Alternatively, enable change streams and ensure that DTS can subscribe to data changes from the source database within the last seven days through change streams. Otherwise, the task may fail because it cannot obtain data changes from the source database. In extreme cases, data inconsistency or data loss may occur. Issues caused by this are not covered by the DTS Service-Level Agreement (SLA).

    Important
    • We recommend that you obtain data changes from the source database through the oplog.

    • Only MongoDB 4.0 and later support obtaining data changes through change streams.

    • If the source database is an Amazon DocumentDB (non-elastic cluster), you must manually enable Change Streams, and set Migration Method to ChangeStream and Architecture to Sharded Cluster when you configure the task.

  • Source database operation limits:

    • During schema migration and full data migration, do not change the schema of databases or collections. This includes updating data of the array type. Otherwise, the data migration task may fail, or data inconsistency may occur between the source and destination databases.

    • If you perform only full data migration, do not write new data to the source instance. Otherwise, data inconsistency occurs between the source and destination databases. To maintain real-time data consistency, select Schema Migration, Full Data Migration, and Incremental Data Migration.

  • If a collection to be migrated contains a TTL (Time To Live) index, data inconsistency or instance latency may occur.
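The 16 MB per-document limit above can be screened for ahead of time. The sketch below approximates document size with a JSON encoding, which only roughly tracks BSON size; for an exact check, encode with bson.encode() from PyMongo. The function name and sample documents are illustrative.

```python
import json

MAX_BSON_BYTES = 16 * 1024 * 1024  # MongoDB's 16 MB document limit

def roughly_oversized(doc: dict, limit: int = MAX_BSON_BYTES) -> bool:
    """Approximate a document's size via its UTF-8 JSON encoding. BSON
    sizes differ slightly, so treat this as a screening pass only."""
    return len(json.dumps(doc).encode("utf-8")) > limit

print(roughly_oversized({"_id": 1, "payload": "x" * (17 * 1024 * 1024)}))  # True
print(roughly_oversized({"_id": 2, "payload": "small"}))                   # False
```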

Other limits

  • If the destination instance is a sharded cluster instance:

    • Purge orphaned documents. Otherwise, migration performance is affected. If documents with conflicting _id values are found during migration, data inconsistency or task failure may occur.

    • Before the task starts, add a sharding key to the source data that corresponds to the sharding key in the destination instance. If you cannot add a sharding key to the source data, see Migrate data from a MongoDB instance without a sharding key to a MongoDB sharded cluster instance.

    • After the task starts, the data to be migrated must include the sharding key when you use the INSERT command. You cannot change the sharding key when you use the UPDATE command.

  • If the destination instance is a replica set instance:

    • When Access Method is set to Express Connect, VPN Gateway, or Smart Access Gateway; Public IP Address; or Cloud Enterprise Network (CEN), you must set Domain Name or IP and Port Number to the address and port of the primary node, or configure a high-availability connection address. For more information about high-availability connection addresses, see Create an instance with a high-availability MongoDB source or destination database.

    • When Access Method is Self-managed Database on ECS, enter the port of the primary node for Port Number.

  • Connecting to a MongoDB database using an SRV record is not supported.

  • We recommend that the source and destination MongoDB databases have the same version, or that you migrate from a lower version to a higher version to ensure compatibility. If you migrate from a higher version to a lower version, compatibility issues may occur.

  • Data in the admin, config, and local databases cannot be migrated.

  • If the destination collection has a unique index or its capped property is set to true, concurrent replay is not supported for the collection during incremental migration. Only single-threaded writes are supported. This may increase task latency.

  • Transaction information is not retained. Transactions from the source database are converted into individual records in the destination database.

  • When DTS writes data to the destination collection, if a primary key or unique key conflict occurs, DTS skips the corresponding write statement and retains the existing data in the destination collection.

  • If the source is a MongoDB instance earlier than version 3.6 and the destination is a MongoDB instance of version 3.6 or later, the order of fields in the data may be inconsistent after migration. This is due to differences in the execution plans of the database engines. The field-value pairs remain consistent. If your business logic involves text match queries on nested structures, assess the potential impact of this inconsistency.

  • Before you migrate data, assess the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS consumes some read and write resources of the source and destination databases, which may increase the database workload.

  • Full data migration involves concurrent INSERT operations, which can cause fragmentation in the destination collections. After full data migration is complete, the disk space used by the destination collections will be larger than that of the source collections.

  • Confirm whether the migration precision that DTS provides for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns using ROUND(COLUMN,PRECISION). If you do not specify the precision, DTS migrates FLOAT values with a precision of 38 digits and DOUBLE values with a precision of 308 digits.

  • DTS attempts to resume failed migration tasks within seven days. Before you switch your business to the destination instance, make sure to end or release the task, or use the revoke command to revoke the write permissions of the account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance after the task is automatically resumed.

  • Because DTS writes data concurrently, the storage space used by the destination instance is 5% to 10% larger than that of the source instance.

  • To query the number of documents in the destination MongoDB instance, use the db.$table_name.aggregate([{ $count:"myCount"}]) syntax.

  • Make sure that the destination MongoDB instance does not have the same primary keys (the _id field by default) as the source instance. Otherwise, data loss may occur. If the destination instance has the same primary keys, clear the relevant data from the destination instance without affecting your business. This means deleting the documents in the destination instance that have the same _id values as the source instance.

  • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

    Note

    When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include, but are not limited to, those described in Modify instance parameters.

  • If the destination database is a MongoDB sharded cluster, after you switch your business to this database, you must ensure that your business operations comply with the requirements for sharded collections in that MongoDB database.

  • If the source database runs MongoDB 5.0 or later and the destination database runs a version earlier than 5.0, you cannot migrate capped collections. Attempting to do so can cause the task to fail or lead to data inconsistency between the source and destination databases. This is because the behavior of capped collections changed in MongoDB 5.0, which allows explicit deletions and document size increases on update; earlier database kernels do not support these features.

  • Migrating time-series collections introduced in MongoDB 5.0 and later is not supported.
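To act on the _id-conflict warning above before the task starts, you can compare the _id values on both sides. The helper below is a hypothetical sketch that works on plain lists; with live instances you would first collect the values with a driver query such as collection.distinct("_id").

```python
def conflicting_ids(source_ids, destination_ids):
    """Return the _id values present on both sides, sorted by their string
    form so mixed types compare safely. Inputs are plain iterables here;
    in practice they would come from a driver query."""
    return sorted(set(source_ids) & set(destination_ids), key=str)

src = [1, 2, 3, "a"]
dst = [3, "a", "b"]
print(conflicting_ids(src, dst))  # [3, 'a']
```

Any _id returned here should be cleared from the destination (if your business allows it) before the migration starts, as described above.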

Special cases

If the source is a self-managed MongoDB database:

  • A primary/secondary switchover during migration will cause the migration task to fail.

  • DTS calculates latency by comparing the timestamp of the last migrated data record with the current timestamp. If the source database has not been updated for a long time, the latency information may be inaccurate. If the task shows high latency, you can perform an update operation on the source database to refresh the latency information.

Note

If you choose to migrate the entire database, you can also create a heartbeat table that is updated or written to every second.
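A heartbeat collection like the one described above only needs a document that changes every second. The sketch below builds such documents; the _id value dts_heartbeat and the field names are made up, and in practice you would upsert one per second with a driver or a scheduled script.

```python
import time

def heartbeat_doc(counter: int) -> dict:
    """Build one heartbeat document. Writing this into a dedicated
    collection once per second keeps a fresh oplog entry on the source,
    so the latency reported by DTS stays accurate."""
    return {"_id": "dts_heartbeat", "counter": counter, "ts": int(time.time())}

beats = [heartbeat_doc(i) for i in range(3)]
print([b["counter"] for b in beats])  # [0, 1, 2]
```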

Billing

| Migration type | Instance configuration fee | Internet traffic fee |
| --- | --- | --- |
| Schema migration and full data migration | Free of charge. | Charged only when the Access Method parameter of the destination database is set to Public IP Address. For more information, see Billing overview. |
| Incremental data migration | Charged. For more information, see Billing overview. | Same as above. |

Migration types

| Migration type | Description |
| --- | --- |
| Schema migration | Migrates the schemas of the selected objects from the source ApsaraDB for MongoDB instance to the destination ApsaraDB for MongoDB instance. Supported objects: DATABASE, COLLECTION, and INDEX. |
| Full migration | Migrates all historical data of the selected objects from the source ApsaraDB for MongoDB instance to the destination ApsaraDB for MongoDB instance. Supported objects: DATABASE and COLLECTION. |
| Incremental migration | After full migration is complete, migrates incremental updates from the source ApsaraDB for MongoDB instance to the destination ApsaraDB for MongoDB instance. The supported operations depend on the migration method, as described below. |

Using Oplog

Incremental migration does not support databases created after the task starts. Supported incremental updates include the following:

  • CREATE COLLECTION, INDEX

  • DROP DATABASE, COLLECTION, INDEX

  • RENAME COLLECTION

  • Insert, update, or delete documents in collections.

    Note

    For incremental document updates, only operations that use the $set operator are supported.

Using ChangeStream

Supported incremental updates include the following:

  • DROP DATABASE, COLLECTION

  • RENAME COLLECTION

  • Insert, update, or delete documents in collections.

    Note

    For incremental document updates, only operations that use the $set operator are supported.
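Because both migration methods replay document updates only when they use $set, you can screen your application's update statements before a migration. The checker below is a hypothetical sketch, not a DTS API:

```python
def incremental_update_supported(update_doc: dict) -> bool:
    """Per the limitation above, DTS replays incremental document updates
    only when they use the $set operator. Flag update documents that use
    any other operator ($inc, $unset, ...) or mix operators with $set."""
    operators = {key for key in update_doc if key.startswith("$")}
    return operators == {"$set"}

print(incremental_update_supported({"$set": {"status": "done"}}))              # True
print(incremental_update_supported({"$inc": {"count": 1}}))                    # False
print(incremental_update_supported({"$set": {"a": 1}, "$unset": {"b": ""}}))   # False
```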

Database account permissions

| Database | Schema migration | Full migration | Incremental migration |
| --- | --- | --- | --- |
| Source ApsaraDB for MongoDB | Read permission on the databases to be migrated and the config database. | Same as schema migration. | Read permission on the databases to be migrated, the admin database, and the local database. |
| Destination ApsaraDB for MongoDB | The dbAdminAnyDatabase permission, readWrite permission on the destination database, and read permission on the local database. | Same as schema migration. | Same as schema migration. |

For instructions on creating and authorizing database accounts for the source and destination ApsaraDB for MongoDB instances, see Manage MongoDB database users in DMS.
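As a sketch of accounts consistent with the permissions listed above, the command documents below could be passed to db.createUser in the mongo shell or db.command(...) with PyMongo. The user names, password, and the database name mydb are placeholders; grant read on every database you plan to migrate.

```python
# Hypothetical role grants mirroring the permission table above; nothing
# here is executed against a server.
source_user = {
    "createUser": "dts_source",          # placeholder account name
    "pwd": "change-me",                  # placeholder password
    "roles": [
        {"role": "read", "db": "mydb"},  # each database to be migrated
        {"role": "read", "db": "config"},
        {"role": "read", "db": "admin"},  # required for incremental migration
        {"role": "read", "db": "local"},  # oplog access
    ],
}
destination_user = {
    "createUser": "dts_dest",
    "pwd": "change-me",
    "roles": [
        {"role": "dbAdminAnyDatabase", "db": "admin"},
        {"role": "readWrite", "db": "mydb"},
        {"role": "read", "db": "local"},
    ],
}
print(len(source_user["roles"]), len(destination_user["roles"]))  # 4 3
```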

Procedure

  1. Navigate to the migration task list page for the destination region using one of the following methods.

    From the DTS console

    1. Log on to the Data Transmission Service (DTS) console.

    2. In the navigation pane on the left, click Data Migration.

    3. In the upper-left corner of the page, select the region where the migration instance is located.

    From the DMS console

    Note

    The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.

    1. Log on to the Data Management (DMS) console.

    2. In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

    3. To the right of Data Migration Tasks, select the region where the migration instance is located.

  2. Click Create Task to navigate to the task configuration page.

  3. Configure the source and destination databases.

    Warning

    After you select the source and destination instances, we recommend that you carefully read the limits displayed at the top of the page. Otherwise, the task may fail or data inconsistency may occur.


    Task Name

    DTS automatically generates a task name. We recommend that you specify a descriptive name for easy identification. The name does not need to be unique.

    Source Database

    Select Existing Connection

    • To use a database instance that has been added to the system (created or saved), select the desired database instance from the drop-down list. The database information below will be automatically configured.

      Note

      In the DMS console, this parameter is named Select a DMS database instance.

    • If you have not registered the database instance with the system, or do not need to use a registered instance, manually configure the database information below.

    Database Type

    Select MongoDB.

    Connection Type

    Select Cloud Instance.

    Instance Region

    Select the region where the source ApsaraDB for MongoDB instance resides.

    Replicate Data Across Alibaba Cloud Accounts

    In this example, a database instance under the current Alibaba Cloud account is used. Select No.

    Architecture Type

    Select Replica Set Architecture.

    • Replica Set Architecture: Achieves high availability and read/write splitting through multiple node types. For more information, see Replica set architecture.

    • Sharded Cluster Architecture: Provides three components—Mongos, Shard, and ConfigServer—and allows flexible selection of Mongos and Shard counts and configurations. For more information, see Sharded cluster architecture.

    Migration Method

    Select an incremental data migration method based on your situation.

    • Oplog (recommended):

      Available if Oplog is enabled on the source database.

      Note

      Oplog is enabled by default for self-managed MongoDB and ApsaraDB for MongoDB. Using Oplog results in lower latency for incremental migration tasks (faster log retrieval), so we recommend selecting Oplog.

    • ChangeStream: Available if change streams are enabled on the source database.

      Note
      • If the source database is Amazon DocumentDB (non-elastic cluster), you can only select ChangeStream.

      • If Architecture is set to Sharded Cluster, you do not need to enter Shard account and Shard password.

    Instance ID

    Select the instance ID of the source ApsaraDB for MongoDB instance.

    Authentication Database Name

    Enter the name of the database to which the source ApsaraDB for MongoDB database account belongs. The default value is admin if you have not changed it.

    Database Account

    Enter the database account for the source ApsaraDB for MongoDB instance. For permission requirements, see Database account permissions.

    Database Password

    Enter the password for the database account.

    Encryption

    DTS supports three connection types: Non-encrypted, SSL-encrypted, and Mongo Atlas SSL. The options available for the Encryption parameter are determined by the values selected for the Access Method and Architecture parameters. The options displayed in the DTS console prevail.

    Note
    • MongoDB databases whose Architecture is Sharded Cluster and whose Migration Method is Oplog do not support the SSL-encrypted option.

    • If the source database is a self-managed MongoDB database that uses the replica set architecture, the Access Method is not Alibaba Cloud Instance, and you select SSL-encrypted, you can also upload a certificate authority (CA) certificate to verify the connection to the source database.

    Destination Database

    Select Existing Connection

    • To use a database instance that has been added to the system (created or saved), select the desired database instance from the drop-down list. The database information below will be automatically configured.

      Note

      In the DMS console, this parameter is named Select a DMS database instance.

    • If you have not registered the database instance with the system, or do not need to use a registered instance, manually configure the database information below.

    Database Type

    Select MongoDB.

    Connection Type

    Select Cloud Instance.

    Instance Region

    Select the region where the destination ApsaraDB for MongoDB instance resides.

    Replicate Data Across Alibaba Cloud Accounts

    In this example, a database instance under the current Alibaba Cloud account is used. Select No.

    Architecture Type

    Select an architecture based on your business needs:

    • Replica Set Architecture: Achieves high availability and read/write splitting through multiple node types. For more information, see Replica set architecture.

    • Sharded Cluster Architecture: Provides three components—Mongos, Shard, and ConfigServer—and allows flexible selection of Mongos and Shard counts and configurations. For more information, see Sharded cluster architecture.

    Instance ID

    Select the instance ID of the destination ApsaraDB for MongoDB instance.

    Authentication Database Name

    Enter the name of the database to which the destination ApsaraDB for MongoDB database account belongs. The default value is admin if you have not changed it.

    Database Account

    Enter the database account for the destination ApsaraDB for MongoDB instance. For permission requirements, see Database account permissions.

    Database Password

    Enter the password for the database account.

    Encryption

    DTS supports three connection types: Non-encrypted, SSL-encrypted, and Mongo Atlas SSL. The options available for the Encryption parameter are determined by the values selected for the Access Method and Architecture parameters. The options displayed in the DTS console prevail.

    Note
    • MongoDB databases whose Architecture is Sharded Cluster do not support the SSL-encrypted option.

    • If the destination database is a self-managed MongoDB database that uses the Replica Set, the Access Method is not Alibaba Cloud Instance, and you select SSL-encrypted, DTS also supports uploading a CA certificate to verify the connection.

  4. After completing the configuration, click Test Connectivity and Proceed at the bottom of the page.

    Note
    • Ensure that you add the CIDR blocks of the DTS servers (either automatically or manually) to the security settings of both the source and destination databases to allow access. For more information, see Add the IP address whitelist of DTS servers.

    • If the source or destination is a self-managed database (i.e., the Access Method is not Alibaba Cloud Instance), you must also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.

  5. Configure the task objects.

    1. On the Configure Objects page, configure the objects that you want to migrate.


      Migration Types

      • If you only need to perform a full migration, select both Schema Migration and Full Data Migration.

      • To perform a migration with no downtime, select Schema Migration, Full Data Migration, and Incremental Data Migration.

      Note
      • If you do not select Schema Migration, you must ensure that the databases and collections that receive the data exist in the destination database. You can also use the object name mapping feature in the Selected Objects box as needed.

      • If you do not select Incremental Data Migration, do not write new data to the source instance during data migration to ensure data consistency.

      For more information about task steps, see Migration types.

      Processing Mode of Conflicting Tables

      • Precheck and Block on Error: Checks whether a collection with the same name exists in the destination database. If no such collection exists, the check passes. If a collection with the same name exists, an error is reported during precheck, and the migration task will not start.
        Note If you cannot delete or rename the collection with the same name in the destination database, change the name of the collection to be migrated by using the object name mapping feature. For more information, see Object name mapping.
      • Ignore Errors and Continue: Skips the check for collections with the same name in the destination database.
        Warning Selecting Ignore Errors and Continue may cause data inconsistency and business risks, such as:
        • If a record with the same primary key value as in the source database exists in the destination database, the existing record in the destination database is retained, and the record from the source database is not migrated.
        • Data initialization may fail, only partial data may be migrated, or the migration may fail.

      Capitalization of Object Names in Destination Instance

      You can configure the case policy for database and collection names in the destination instance. By default, DTS Default Policy is selected. You can also choose to align with the source or destination database default policies. For more information, see Case conversion policy for destination object names.

      Source Objects

      In the Source Objects box, click the objects to be migrated, and then click the rightwards arrow icon to move them to the Selected Objects box.

      Note

      You can select migration objects at the DATABASE or COLLECTION granularity.

      Selected Objects

      • To set the name of a migration object in the destination instance, or to specify the object that receives data in the destination instance, right-click the migration object in the Selected Objects box to make changes. For more information, see Object name mapping.

      • To remove a selected migration object, click the object in the Selected Objects box, and then click the leftwards arrow icon to move it back to the Source Objects box.

      Note
      • To select incremental migration operations at the database or collection level, right-click the object in the Selected Objects box and make selections in the dialog box that appears.

      • To set filter conditions (supported only during full migration, not incremental migration), right-click the collection in the Selected Objects box and configure the conditions in the dialog box that appears. For instructions, see Set filter conditions.

      • If you use object name mapping (specifying a database or collection to receive data), migration of other objects that depend on this object may fail.

    2. Click Next: Advanced Settings to configure advanced parameters.


      Dedicated Cluster for Task Scheduling

      By default, DTS schedules tasks on a shared cluster. You do not need to select one. If you want more stable tasks, you can purchase a dedicated cluster to run DTS migration tasks.

      Retry Time for Failed Connections

      After the migration task starts, if the connection to the source or destination database fails, DTS reports an error and immediately begins to retry the connection. The default retry duration is 720 minutes. You can customize the retry time to a value from 10 to 1440 minutes. We recommend that you set the duration to more than 30 minutes. If DTS reconnects to the source and destination databases within the specified duration, the migration task automatically resumes. Otherwise, the task fails.

      Note
      • For multiple DTS instances that share the same source or destination, the network retry time is determined by the setting of the last created task.

      • Because you are charged for the task during the connection retry period, we recommend that you customize the retry time based on your business needs, or release the DTS instance as soon as possible after the source and destination database instances are released.

      Retry Time for Other Issues

      After the migration task starts, if a non-connectivity issue, such as a DDL or DML execution exception, occurs in the source or destination database, DTS reports an error and immediately begins to retry the operation. The default retry duration is 10 minutes. You can customize the retry time to a value from 1 to 1440 minutes. We recommend that you set the duration to more than 10 minutes. If the related operations succeed within the specified retry duration, the migration task automatically resumes. Otherwise, the task fails.

      Important

      The value of Retry Time for Other Issues must be less than the value of Retry Time for Failed Connections.

      Enable Throttling for Full Data Migration

      During full migration, DTS consumes read and write resources on the source and destination databases, which may increase the database load. If required, you can enable throttling for the full migration task. You can set Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) to reduce the load on the destination database.

      Note
      • This configuration item is available only if you select Full Data Migration for Migration Types.

      • You can also adjust the full migration speed after the migration instance is running.

      Only one data type for primary key _id in a table of the data to be synchronized

      Indicates whether the data type of the primary key _id is unique within the same collection in the data to be migrated.

      Important
      • Select as needed. Otherwise, data loss may occur.

      • This configuration is available only if Migration Types includes Full Data Migration.

      • Yes: Unique. During full migration, DTS does not scan the data types of primary keys in the source database. For each collection, DTS migrates data corresponding to only one primary key data type.

      • No: Not unique. During full migration, DTS scans the data types of primary keys in the source database and migrates all data.

      Enable Throttling for Incremental Data Migration

      If required, you can also choose to set speed limits for the incremental migration task. You can set RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s) to reduce the load on the destination database.

      Note
      • This configuration item is available only if you select Incremental Data Migration for Migration Types.

      • You can also adjust the incremental migration speed after the migration instance is running.

      Environment Tag

      Select an environment tag to identify the instance as needed. In this example, no tag is selected.

      Configure ETL

      Based on your business needs, select whether to configure the ETL feature to process data.

      • Yes: Configures the ETL feature. You must also enter data processing statements in the text box.

      • No: Does not configure the ETL feature.

      Monitoring and Alerting

      Select whether to set alerts and receive alert notifications based on your business needs.

      • No: Does not set an alert.

      • Yes: Configures alerts. You must also set the alert threshold and alert notification settings. If the migration fails or the latency exceeds the threshold, the system sends an alert notification.

    3. Click Next: Data Validation to configure a data validation task.

      For more information about the data validation feature, see Configure data validation.

  6. Save the task and run a precheck.

    • To view the parameters for configuring this instance when you call the API operation, move the pointer over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters in the bubble that appears.

    • If you do not need to view or have finished viewing the API parameters, click Next: Save Task Settings and Precheck at the bottom of the page.

    Note
    • Before the migration task starts, DTS performs a precheck. The task starts only after it passes the precheck.

    • If the precheck fails, click View Details next to the failed check item, fix the issue based on the prompt, and then run the precheck again.

    • If a warning is reported during the precheck:

      • For check items that cannot be ignored, click View Details next to the failed item, fix the issue based on the prompt, and then run the precheck again.

      • For check items that can be ignored, click Confirm Alert Details, and then click Ignore, OK, and Precheck Again in sequence to skip the item and run the precheck again. Ignoring a warning may cause issues such as data inconsistency and pose risks to your business.

  7. Purchase the instance.

    1. When the Success Rate is 100%, click Next: Purchase Instance.

    2. On the Purchase page, select the link specification for the data migration instance. For more information, see the following table.


      New Instance Class

      Resource Group Settings

      Select the resource group to which the instance belongs. The default value is default resource group. For more information, see What is Resource Management?

      Instance Class

      DTS provides migration specifications with different performance levels. The link specification affects the migration speed. You can select a specification based on your business scenario. For more information, see Data migration link specifications.

    3. After the configuration is complete, read and select Data Transmission Service (Pay-as-you-go) Service Terms.

    4. Click Buy and Start. In the OK dialog box that appears, click OK.

      You can view the progress of the migration task on the Data Migration Tasks list page.

      Note
      • If the migration task does not include incremental migration, it stops automatically after the full migration is complete. After the task stops, its Status changes to Completed.

      • If the migration task includes incremental migration, it does not stop automatically. The incremental migration task continues to run. While the incremental migration task is running, the Status of the task is Running.

FAQ

  • Why do task latency and data inconsistency occur even when no data is written to the database?

    • Cause: A conflict between the automatic deletion mechanism of TTL indexes in MongoDB collections and the data synchronization mechanism of DTS can cause latency and data inconsistency in synchronization or migration tasks.

      • Missed DELETE operations during incremental writes reduce efficiency: When the TTL index on the source instance deletes expired data, it generates a DELETE record in the Oplog. DTS then synchronizes this DELETE operation. If the TTL index on the destination instance has already deleted the same data, the DELETE operation from DTS will not find the data to delete. The MongoDB engine then returns an unexpected number of affected rows. This triggers an exception handling process and reduces migration efficiency.

      • Data inconsistency caused by asynchronous deletion of expired data: A TTL index does not delete data in real time. Expired data might still exist on the source instance when it has already been deleted on the destination instance. This causes data inconsistency.

        Example:

        The MongoDB Oplog or ChangeStream records only the updated fields for an UPDATE operation. It does not record the full document before and after the update. Therefore, if an UPDATE operation cannot find the target data on the destination, DTS ignores the operation.

        | Timing | Source instance | Destination instance |
        | --- | --- | --- |
        | 1 | Service inserts data | |
        | 2 | | DTS synchronizes the INSERT operation |
        | 3 | Data has expired but is not yet deleted by the TTL index | |
        | 4 | Service updates the data (for example, updates the TTL index field to change the expiration time) | |
        | 5 | | TTL index deletes the data |
        | 6 | | DTS synchronizes the UPDATE, but the data is not found. The operation is ignored. |

        As a result, this document is missing from the destination MongoDB instance.

    • Solution: You need to temporarily modify the expiration time of the TTL index in the destination during synchronization or migration to ensure efficiency and consistency. For more information, see Best practices for synchronizing/migrating collections with TTL indexes when MongoDB is the source.
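The missed-update race in the timeline above can be illustrated with a small simulation. This is a toy model of the replay behavior, not DTS code: the oplog entry carries only the changed fields, so an UPDATE that finds no target document is silently dropped.

```python
def replay_update(destination: dict, doc_id, update_fields: dict) -> dict:
    """Mimic DTS replaying an UPDATE from the oplog. Because the oplog
    records only the changed fields (not the full document), the replay
    is ignored when the target document is already gone."""
    if doc_id in destination:
        destination[doc_id].update(update_fields)
    return destination

dest = {}
dest["d1"] = {"val": 1, "expire_at": 100}          # steps 1-2: INSERT synchronized
del dest["d1"]                                     # step 5: destination TTL deletes first
replay_update(dest, "d1", {"expire_at": 200})      # step 6: UPDATE finds nothing, ignored
print("d1" in dest)  # False: the document is missing on the destination
```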