FAQ

Updated at: 2025-04-01 12:51

When using Data Transmission Service (DTS), if you receive an error message from DTS, you can look up the solution in Common errors. If you do not receive a specific error message, browse the problem categories below to find the question that matches your issue.

Problem categories

Common problems are divided into the following categories:

  • Billing issues

  • Performance and specification issues

  • Precheck issues

  • Database connection issues

  • Data synchronization issues

  • Data migration issues

  • Change tracking issues

  • Other issues

Billing issues

How is DTS billed?

DTS provides both subscription and pay-as-you-go billing methods. For more information about billing methods, see Billing overview.

How can I view DTS bills?

For instructions on viewing DTS bills, see View bills.

Will I still be charged when an instance is paused?

  • Migration instances are not charged during the pause period.

  • You are still charged for a data synchronization instance during the period in which the instance is paused, regardless of whether the source or destination database can be connected. This is because the paused instance still consumes resources such as CPU and memory. After the instance is paused, DTS stops writing data to the destination database but continues trying to read logs from the source database. This helps resume the data synchronization instance immediately after the instance is restarted.

Why is data synchronization more expensive than data migration?

Data synchronization offers more advanced features, such as online adjustment of synchronization objects and two-way data synchronization between MySQL databases. It is also based on internal network transmission, which ensures lower network latency.

What happens when an account has overdue payments?

For the impact of account overdue payments, see Expiration and overdue payment instructions.

How can I release a subscription task early?

First convert the subscription task to a pay-as-you-go task, then unsubscribe. For conversion methods, see Switch between billing methods.

Can a subscription task be converted to pay-as-you-go?

Yes. For conversion methods, see Switch between billing methods.

Can a pay-as-you-go task be converted to subscription?

Yes, for data synchronization and change tracking (subscription) tasks. For conversion methods, see Switch between billing methods.

Note

Data migration tasks only support pay-as-you-go billing.

Why did DTS tasks suddenly start charging?

The free period for the instance may have expired. DTS offers preferential policies for tasks whose destination database is an Alibaba Cloud self-developed database engine: such tasks are free for a certain period, and billing begins once the free period expires.

Why am I still being charged for a task that has been released?

DTS pay-as-you-go tasks are billed daily. Since you used DTS on the day you released the task, you will still be charged for that day.

How does pay-as-you-go billing work?

DTS pay-as-you-go billing only occurs during the normal operation of incremental tasks (including when Incremental Synchronization tasks are paused, but not when Incremental Migration tasks are paused). For more information, see Billing method.

Does DTS charge for data transfer?

Some DTS tasks charge for public network traffic and data traffic, regardless of the regions of the source and destination databases. Migration tasks where the destination database's Access Method is Public IP Address will incur public network traffic fees. Full verification tasks in Verify All Fields Based on Sampled Rows mode will charge data traffic fees based on the amount of data verified. For more information, see Billable items.

Performance and specification issues

What are the differences between different instance specifications?

For differences between instance specifications, see Data migration link specification description and Data synchronization link specification description.

Is it possible to upgrade instance specifications?

Yes. For more information, see Upgrade the link specification of an instance.

Is it possible to downgrade instance specifications?

Currently, only synchronization instances support this. For more information, see Downgrade the link specification of an instance.

How long does it take to synchronize or migrate data?

DTS transmission performance is affected by many factors, such as internal DTS load, the load on the source and destination database instances, the amount of data to be transmitted, whether the DTS instance includes incremental tasks, and network conditions, so the time required for a DTS task cannot be estimated. If you have high performance requirements, we recommend choosing specifications with higher performance limits. For more information about specifications, see Data migration link specification description and Data synchronization link specification description.

How can I view the performance information of data migration or data synchronization tasks?

For methods to view performance, see View incremental migration link status and performance or View synchronization link status and performance.

Why can't I find the specified DTS instance in the console?

Possible reasons include the following:

  • The account's resource group selection is incorrect. We recommend selecting All Resources Of The Account.

  • The region selection for the instance is incorrect. Please verify that the selected region is the region to which the target instance belongs.

  • The task type selection for the instance is incorrect. Please verify that the current task list page is the task type of the target instance. For example, synchronization instances will only appear in the Data Synchronization Tasks list.

  • The instance has been released due to expiration or overdue payment. After a DTS instance expires or has overdue payments, the data transmission task will stop service. If payment is not successfully made within 7 days, the system will release and delete the instance. For more information, see Expiration and overdue payment instructions.

Precheck issues

Why is there a warning for the Redis eviction policy check item?

If the destination's data eviction policy (maxmemory-policy) is configured to a value other than noeviction, it may cause the destination data to be inconsistent with the source data. For details about data eviction policies, see Introduction to Redis data eviction policies.

How do I handle binlog-related precheck failures during incremental data migration?

Check if the source database Binlog is normal. For details, see Source database Binlog check.

Database connection issues

How do I handle source database connection failures?

Check if the source database information and settings are correct. For details, see Source database connectivity check.

How do I handle destination database connection failures?

Check if the destination database information and settings are correct. For details, see Destination database connectivity check.

How can I perform data migration and synchronization when the source or destination instance is in a region not supported by DTS?

  • For data migration tasks, you can apply for a public endpoint for the database instance (such as RDS MySQL) and access it as a Public IP. The instance region can be selected from regions supported by DTS, and the corresponding DTS server IP address ranges should be added to the instance's whitelist. For the IP whitelist that needs to be added, see Add the IP address ranges of DTS servers.

  • For data synchronization tasks, since data synchronization does not support accessing database instances as Public IP, DTS does not currently support data synchronization in these regions.

Data synchronization issues

Which database instances does DTS support for synchronization?

DTS supports data synchronization between various data sources, such as relational database management systems (RDBMS), NoSQL databases, and online analytical processing (OLAP) databases. For supported database instances for synchronization, see Synchronization solution overview.

What is the difference between data migration and data synchronization?

The differences between data migration and data synchronization are compared below, item by item.

Note

Self-managed database: when configuring a DTS instance, a database instance whose Access Method is not Alibaba Cloud Instance. Self-managed databases include third-party cloud database instances, databases deployed locally, and databases deployed on ECS instances.

Application scenarios

  • Data migration: mainly used for cloud migration, such as migrating local databases, self-managed databases on ECS, or third-party cloud databases to Alibaba Cloud databases.

  • Data synchronization: mainly used for real-time data synchronization between two data sources, suitable for scenarios such as geo-disaster recovery, data disaster recovery, cross-border data synchronization, query and report offloading, cloud BI, and real-time data warehousing.

Supported databases

  • Data migration: see Migration solution overview. For some databases that do not support data synchronization, such as standalone MongoDB databases and OceanBase (MySQL mode) databases, you can use data migration to achieve data synchronization.

  • Data synchronization: see Synchronization solution overview.

Supported database deployment locations (connection methods)

  • Data migration:

    • Alibaba Cloud instances

    • Self-managed databases with public IP addresses

    • Self-managed databases accessed through Database Gateway (DG)

    • Self-managed databases accessed through Cloud Enterprise Network (CEN)

    • Self-managed databases on ECS

    • Self-managed databases accessed through dedicated lines/VPN Gateway/Smart Access Gateway

  • Data synchronization:

    • Alibaba Cloud instances

    • Self-managed databases accessed through Database Gateway (DG)

    • Self-managed databases accessed through Cloud Enterprise Network (CEN)

    • Self-managed databases on ECS

    • Self-managed databases accessed through dedicated lines/VPN Gateway/Smart Access Gateway

    Note: data synchronization is based on internal network transmission, which ensures lower network latency.

Feature differences

  • Data migration:

    • Supports database-, table-, and column-level object name mapping.

    • Supports filtering the data to be migrated.

    • Supports selecting the SQL operation types to migrate, such as only INSERT operations.

    • Supports reading VPCs under other Alibaba Cloud accounts, which enables cross-Alibaba Cloud account migration of self-managed databases in VPCs.

  • Data synchronization:

    • Supports database-, table-, and column-level object name mapping.

    • Supports filtering the data to be synchronized.

    • Supports online modification of synchronization objects.

    • Supports two-way synchronization between MySQL databases.

    • Supports selecting the SQL operation types to synchronize, such as only INSERT operations.

Billing method

  • Data migration: only supports pay-as-you-go.

  • Data synchronization: supports both pay-as-you-go and subscription.

Is there a charge

  • Data migration: migration instances that include incremental migration tasks incur corresponding fees.

  • Data synchronization: yes. Synchronization instances include incremental synchronization tasks by default, so they always incur corresponding fees.

Billing rules

  • Data migration: charged only during the normal operation of incremental data migration (excluding the pause period of incremental data migration). No charges during the schema migration and full data migration phases.

  • Data synchronization: for pay-as-you-go, charged only during the normal operation of incremental data synchronization (including the pause period of incremental data synchronization), with no charges during the schema synchronization and full data synchronization phases. For subscription, a one-time fee is charged based on the configuration and duration selected at the time of purchase.

What is the working principle of data synchronization?

For the working principle of data synchronization, see Product architecture and functional principles.

How is synchronization latency calculated?

Synchronization latency refers to the difference between the timestamp of the latest data synchronized to the destination database and the current timestamp of the source database. The unit is milliseconds.

Note

Under normal circumstances, the latency is within 1000 milliseconds.

Can synchronization objects be modified for data synchronization tasks?

Yes. For methods to modify synchronization objects, see Add synchronization objects and Remove synchronization objects.

Can new tables be added for synchronization in data synchronization tasks?

Yes. For methods to add new tables, see Add synchronization objects.

How can I modify synchronization objects such as tables and fields in a running synchronization task?

When the full synchronization phase of the synchronization task ends and enters the incremental data synchronization phase, you can modify the synchronization objects. For methods to modify synchronization objects, see Add synchronization objects and Remove synchronization objects.

Will pausing a synchronization task and restarting it after a period of time cause data inconsistency?

If there are changes in the source database during the pause period of the synchronization task, it may cause data inconsistency between the source and destination databases. After the synchronization task is restarted and the incremental data is synchronized to the destination database, the destination database data will be consistent with the source database.

If data is deleted in the source database of an incremental synchronization task, will the synchronized data in the destination database be deleted?

If DELETE is not selected among the DML operations to be synchronized by the incremental synchronization task, the data in the destination database will not be deleted. Otherwise, the synchronized data in the destination database will also be deleted.

For synchronization between Redis instances, will the data in the destination Redis instance be overwritten?

Data with the same key will be overwritten. DTS will check the destination during the precheck phase, and if the destination data is not empty, an error will be reported.

Does the synchronization task support filtering certain fields or data?

Yes. You can filter columns that do not need to be synchronized through the mapping function, and filter data to be synchronized by specifying SQL Where conditions. For more information, see Synchronize or migrate partial columns and Filter task data through SQL conditions.

Can a synchronization task be converted to a migration task?

No, different types of tasks do not support mutual conversion.

Is it possible to synchronize only data without synchronizing structure?

Yes. When configuring the synchronization task step, simply do not check Schema Synchronization.

What are the possible reasons for data inconsistency between the source and destination of a data synchronization instance?

Possible reasons for data inconsistency include the following:

  1. The destination data was not cleared when configuring the task, and there was existing data in the destination.

  2. Only the incremental synchronization module was selected when configuring the task, without selecting the full synchronization module.

  3. Only the full synchronization module was selected when configuring the task, without selecting the incremental synchronization module, and there were data changes in the source after the task ended.

  4. There was data written to the destination other than from the DTS task.

  5. There is a delay in incremental writing, and not all incremental data has been written to the destination.

Can the name of the source database be modified in the destination database for data synchronization tasks?

Yes. For methods to modify the source database name in the destination database, see Set the name of a synchronization object in the destination instance.

Is real-time synchronization of DML or DDL operations supported?

Yes, data synchronization between relational databases supports DML operations such as INSERT, UPDATE, and DELETE, and DDL operations such as CREATE, DROP, ALTER, RENAME, and TRUNCATE.

Note

The supported DML or DDL operations vary in different scenarios. Please select the link that matches your business scenario in Synchronization solution overview, and check the supported DML or DDL operations in the specific link configuration document.

Can a read-only instance be used as the source instance for a synchronization task?

Synchronization tasks include incremental data synchronization by default, so there are two scenarios:

  • If the instance is a read-only instance that records transaction logs (such as RDS MySQL 5.7 or 8.0 versions), it can be used as a source instance.

  • If the instance is a read-only instance that does not record transaction logs (such as RDS MySQL 5.6 version), it cannot be used as a source instance.

Does DTS support data synchronization for sharded databases and tables?

Yes. For example, you can synchronize sharded databases and tables from MySQL or PolarDB MySQL to AnalyticDB for MySQL to achieve multi-table consolidation.

Why is the data volume in the destination instance smaller than in the source instance after the synchronization task ends?

If data filtering was performed during the synchronization process, or if there are many table fragments in the source instance, the data volume in the destination instance may be smaller than in the source instance after the synchronization is complete.

Does a cross-account data synchronization task support two-way synchronization?

Currently, bidirectional sync tasks across accounts are only supported between RDS MySQL instances, between PolarDB for MySQL clusters, between Tair (Enterprise Edition) instances, between ApsaraDB for MongoDB (ReplicaSet architecture) instances, and between ApsaraDB for MongoDB (sharded cluster architecture) instances.

Does DTS support cross-border two-way synchronization tasks?

No.

Why is a record added to one database in a two-way synchronization task not added to the other database?

It may be because the reverse task has not been configured.

Why does the incremental display of a synchronization task never reach 100%?

DTS incremental synchronization continuously synchronizes changes from the source to the destination in real-time and does not actively end, meaning there is no 100% completion state. If you no longer need real-time synchronization, please end the task in the DTS console.

Why can't an incremental synchronization task synchronize data?

If a DTS instance is only configured with an incremental synchronization task, DTS will only synchronize incremental data after the task starts, and data before the task starts will not be synchronized to the destination database. We recommend checking Incremental Synchronization, Schema Synchronization, and Full Synchronization when configuring the task to ensure data consistency.

When synchronizing full data from an RDS database, will it affect the performance of the source RDS?

It will affect the query performance of the source database. There are three methods to reduce the impact of DTS tasks on the source database:

  1. Increase the specification of the source database instance.

  2. Pause the DTS task first, and restart the task after the source database load decreases.

  3. Reduce the rate of the DTS task. For methods to adjust the rate, see Adjust the full migration rate.

Why doesn't a synchronization instance with PolarDB-X 1.0 as the source display latency?

Instances with PolarDB-X 1.0 as the source are distributed tasks, and DTS monitoring metrics only exist in subtasks. Therefore, instances with PolarDB-X 1.0 as the source do not display latency information. You can click on the instance ID and view the latency information in the Task Management section under Subtask Details.

Why does a multi-table consolidation task report error DTS-071001?

It may be because an Online DDL operation was performed on the source database during the multi-table consolidation task, modifying the table structure or other aspects of the source database, and the corresponding modifications were not manually made in the destination database.

How do I handle whitelist addition failures when configuring tasks in the old console?

Please use the new console to configure tasks.

How do I handle task failures caused by DDL operations on the source database during DTS data synchronization?

Manually execute the DDL on the destination side based on the DDL operation executed on the source database, then restart the task. During data synchronization, please do not use tools like pt-online-schema-change to perform online DDL changes on synchronization objects in the source database, as this will cause synchronization failures. If no data other than DTS is written to the destination database, you can use Data Management (DMS) to perform online DDL changes, or you can remove tables affected by DDL through modifying synchronization objects. For removal operations, see Remove synchronization objects.

How do I handle task failures caused by DDL operations on the destination database during DTS data synchronization?

If a database or table in the destination database is deleted during DTS incremental synchronization, causing the task to fail, you can use one of the following two methods to recover the task:

  • Method 1: Reconfigure the task, and do not select the database or table that caused the task to fail as the synchronization object.

  • Method 2: Modify the synchronization objects to remove the database or table that caused the task to fail. For specific operations, see Remove synchronization objects.

Can a synchronization task be restored after it is released? Can reconfiguring the task ensure data consistency?

A synchronization task cannot be restored after it is released. When reconfiguring the task, if you do not select Full Synchronization, data added during the period from task release to new task startup cannot be synchronized to the destination database, and data consistency cannot be guaranteed. If your business requires precise data, you can delete the data in the destination database, then reconfigure the synchronization task, and select Schema Synchronization and Full Synchronization in Task Steps (with Incremental Synchronization selected by default).

What should I do if a DTS full synchronization task has no progress for a long time?

If the tables to be synchronized are tables without primary keys, full synchronization will be very slow. It is recommended to add primary keys to the tables to be synchronized in the source database before performing synchronization.

When synchronizing tables with the same name, is it possible to only transmit source table data when it does not exist in the destination table?

Yes. When configuring the task, you can set Processing Mode For Tables That Already Exist In The Destination to Ignore Errors And Continue. With consistent table structures, during full synchronization, when the destination database encounters records with the same primary key values as the source database, those records from the source database will not be synchronized to the destination database.

How do I configure a cross-account synchronization task?

You need to use the Alibaba Cloud account that owns the source instance to configure RAM authorization, and then use the Alibaba Cloud account (primary account) that owns the destination instance to configure the DTS task. For more information, see the configuration example in Configure cross-Alibaba Cloud account tasks.

How do I handle being unable to select a DMS LogicDB instance?

Please ensure that the region to which the instance belongs is selected correctly. If you still cannot select the instance, it may be because there is only one instance. Please continue to configure other parameters.

Does a synchronization task with SQL Server as the source support synchronizing functions?

No. If the granularity of the selected synchronization object is a table, other objects (such as views, triggers, stored procedures) will also not be synchronized to the destination database.

How do I handle data synchronization task errors?

You can check the solution in Common errors based on the error message.

How do I enable hot spot merging for a synchronization task?

Please refer to Modify parameter values to change the value of trans.hot.merge.enable to true.

How do I perform synchronization when the source database has triggers?

When the synchronization object is an entire database, and triggers (TRIGGER) in the database will update a table within the database, this may cause data inconsistency between the source and destination databases. For synchronization operations, see How to configure synchronization or migration jobs when the source database has triggers.

Does DTS support synchronizing the sys library and system libraries?

No.

Does DTS support synchronizing MongoDB's admin and local libraries?

No, DTS does not support using MongoDB's admin and local as source and destination databases.

When can the reverse task of a two-way synchronization task be configured?

The reverse task of a two-way synchronization task can only be configured after the forward incremental task has no delay.

When PolarDB-X 1.0 is the source, does the synchronization task's source PolarDB-X 1.0 support node expansion or contraction?

No. If the source PolarDB-X 1.0 undergoes node expansion or contraction, you need to reconfigure the task.

Can DTS ensure the uniqueness of data synchronized to Kafka?

No. Since data is written to Kafka in append-only form, duplicate records may appear when the DTS task restarts or when source logs are pulled repeatedly. DTS ensures idempotence in the sense that records are delivered in order, so the latest value of a duplicated record is always written last.
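Because duplicates arrive in order with the newest value last, a downstream consumer can make its own processing idempotent by keeping only the last value seen for each key. Below is a minimal sketch of that pattern using the standard Kafka Java client; the endpoint, topic name, consumer group, and the assumption that the record key carries the primary key are illustrative, not values defined by DTS.

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class DedupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: your Kafka endpoint
        props.put("group.id", "dts-dedup-demo");          // assumption: your consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("dts-demo-topic")); // assumption: topic DTS writes to
            Map<String, String> latestByKey = new HashMap<>();
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> r : records) {
                    // Later occurrences of the same key overwrite earlier ones,
                    // so any duplicates collapse to the newest value.
                    latestByKey.put(r.key(), r.value());
                }
                // Apply latestByKey to the downstream store here.
            }
        }
    }
}
```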

Does DTS data synchronization support RDS MySQL to AnalyticDB for MySQL?

Yes. For configuration methods, see Synchronize from RDS MySQL to AnalyticDB for MySQL 3.0.

Why doesn't a synchronization task between Redis instances display full synchronization?

Synchronization between Redis instances supports both full data synchronization and incremental data synchronization, which are combined and displayed as Incremental Synchronization.

Can full synchronization be skipped?

Yes. After skipping full synchronization, incremental synchronization will continue, but errors may occur. It is recommended not to skip full synchronization.

Does DTS support scheduled automatic synchronization?

DTS does not currently support scheduled start of data synchronization tasks.

Will table fragment space also be synchronized during the synchronization process?

No.

What should I be aware of when synchronizing from MySQL 8.0 to MySQL 5.6?

You need to create the database in MySQL 5.6 before performing the synchronization operation. It is recommended to keep the source and destination database versions consistent, or synchronize from a lower version to a higher version to ensure compatibility. When synchronizing from a higher version to a lower version, database compatibility issues may exist.

Can accounts from the source database be synchronized to the destination database?

Currently, only synchronization tasks between RDS MySQL instances support synchronizing accounts. Other synchronization tasks do not currently support this.

Can I configure a cross-account two-way synchronization task?

Currently, bidirectional sync tasks across accounts are only supported between RDS MySQL instances, between PolarDB for MySQL clusters, between Tair (Enterprise Edition) instances, between ApsaraDB for MongoDB (ReplicaSet architecture) instances, and between ApsaraDB for MongoDB (sharded cluster architecture) instances.

Note

For tasks without a Replicate Data Across Alibaba Cloud Accounts configuration item, you can try using CEN to implement cross-account two-way synchronization tasks. For more information, see Access database resources across Alibaba Cloud accounts or regions.

How do I configure parameters when Message Queue for Apache Kafka is the destination?

Please configure according to your actual situation. For configuration methods of some special parameters, see Configure parameters for Message Queue for Apache Kafka instances.

Data migration issues

After executing a data migration task, does the data in the source database still exist?

DTS data migration and synchronization copy data from the source database to the destination database, without affecting the source data.

Which database instances does DTS support for migration?

DTS supports data migration between various data sources, such as relational database management systems (RDBMS), NoSQL databases, and online analytical processing (OLAP) databases. For supported migration instances, see Migration solution overview.

What is the working principle of data migration?

For the working principle of data migration, see Product architecture and functional principles.

Can migration objects be modified for data migration tasks?

No.

Can new tables be added for migration in data migration tasks?

No.

How can I modify migration objects such as tables and fields in a running migration task?

Migration tasks do not support modifying migration objects.

Will pausing a migration task and restarting it after a period of time cause data inconsistency?

If there are changes in the source database during the pause period of the migration task, it may cause data inconsistency between the source and destination databases. After the migration task is restarted and the incremental data is migrated to the destination database, the destination database data will be consistent with the source database.

Can a migration task be converted to a synchronization task?

No, different types of tasks do not support mutual conversion.

Is it possible to migrate only data without migrating structure?

Yes. When configuring the migration task step, simply do not check Schema Migration.

What are the possible reasons for data inconsistency between the source and destination of a data migration instance?

Possible reasons for data inconsistency include the following:

  1. The destination data was not cleared when configuring the task, and there was existing data in the destination.

  2. Only the incremental migration module was selected when configuring the task, without selecting the full migration module.

  3. Only the full migration module was selected when configuring the task, without selecting the incremental migration module, and there were data changes in the source after the task ended.

  4. There was data written to the destination other than from the DTS task.

  5. There is a delay in incremental writing, and not all incremental data has been written to the destination.

Can the name of the source database be modified in the destination database for data migration tasks?

Yes. For methods to modify the source database name in the destination database, see Database, table, and column mapping.

Is data migration within the same instance supported?

Yes. For methods of data migration within the same instance, see Data synchronization or migration between different database names.

Is real-time migration of DML or DDL operations supported?

Yes, data migration between relational databases supports DML operations such as INSERT, UPDATE, and DELETE, and DDL operations such as CREATE, DROP, ALTER, RENAME, and TRUNCATE.

Note

The supported DML or DDL operations vary in different scenarios. Please select the link that matches your business scenario in Migration solution overview, and check the supported DML or DDL operations in the specific link configuration document.

Can a read-only instance be used as the source of a migration task?

If the migration task does not require incremental data migration, a read-only instance can be used as the source instance. If the migration task requires incremental data migration, there are two scenarios:

  • If the instance is a read-only instance that records transaction logs (such as RDS MySQL 5.7 or 8.0 versions), it can be used as a source instance.

  • If the instance is a read-only instance that does not record transaction logs (such as RDS MySQL 5.6 version), it cannot be used as a source instance.

Does DTS support data migration for sharded databases and tables?

Yes. For example, you can migrate sharded databases and tables from MySQL or PolarDB MySQL to AnalyticDB for MySQL to achieve multi-table consolidation.

Does the migration task support filtering certain fields or data?

Yes. You can filter columns that do not need to be migrated through the mapping function, and filter data to be migrated by specifying SQL Where conditions. For more information, see Synchronize or migrate partial columns and Filter data to be migrated.

Why is the data volume in the destination instance smaller than in the source instance after the migration task ends?

If data filtering was performed during the migration process, or if there are many table fragments in the source instance, the data volume in the destination instance may be smaller than in the source instance after the migration is complete.

Why does the completed value displayed for a migration task exceed the total?

The displayed total is an estimated value, and after the migration task is completed, the total will be adjusted to an accurate value.

What is the purpose of the increment_trx table added to the destination database during data migration?

During data migration, an increment_trx table is added to the destination database. This is a checkpoint table that DTS incremental migration creates in the destination instance, mainly used to record the incremental migration position so that the task can resume from the checkpoint after an exception. Do not delete it during the migration process; otherwise, the migration will fail.

Does the data migration task support resuming from a breakpoint during the full migration phase?

Yes. If you pause the task during the full migration phase and then restart it, the task continues from the position already migrated, without starting over.

How do I migrate non-Alibaba Cloud instances to Alibaba Cloud?

For methods to migrate non-Alibaba Cloud instances to Alibaba Cloud, see Migrate from third-party cloud to Alibaba Cloud.

How do I migrate a local Oracle database to PolarDB?

For methods to migrate a local Oracle database to PolarDB, see Migrate from self-managed Oracle to PolarDB PostgreSQL (Compatible with Oracle).

Can a data migration task that has not completed the full migration phase be paused?

Yes.

How do I migrate partial data from RDS MySQL to a self-managed MySQL?

During the migration task configuration process, you can select the objects to be migrated in Source Objects or filter them in Selected Objects according to your needs. The migration between MySQL instances is similar. You can refer to Migrate from self-managed MySQL to RDS MySQL for operation guidance.

How do I migrate between RDS instances under the same Alibaba Cloud account?

DTS supports migration and synchronization between RDS instances. For configuration methods, see the relevant configuration documents in Migration solution overview.

How can I ensure source database business stability when IOPS alarms occur in the source database after a migration task starts?

When DTS tasks are running, if the source database instance load is relatively high, there are three methods to reduce the impact of DTS tasks on the source database:

  1. Increase the specification of the source database instance.

  2. Pause the DTS task first, and restart the task after the source database load decreases.

  3. Reduce the rate of the DTS task. For methods to adjust the rate, see Adjust the full migration rate.

Why can't a database named test be selected for data migration tasks?

DTS data migration does not support migrating system databases. Please select business-created databases for migration.

Why doesn't a migration instance with PolarDB-X 1.0 as the source display latency?

Instances with PolarDB-X 1.0 as the source are distributed tasks, and DTS monitoring metrics only exist in subtasks. Therefore, instances with PolarDB-X 1.0 as the source do not display latency information. You can click on the instance ID and view the latency information in the Task Management section under Subtask Details.

Why can't DTS migrate MongoDB databases?

It may be because the database to be migrated is local or admin. DTS does not support using MongoDB's admin and local as source and destination databases.

Why does a multi-table consolidation task report error DTS-071001?

It may be because an Online DDL operation was performed on the source database during the multi-table consolidation task, modifying the table structure or other aspects of the source database, and the corresponding modifications were not manually made in the destination database.

How do I handle whitelist addition failures when configuring tasks in the old console?

Please use the new console to configure tasks.

How do I handle task failures caused by DDL operations on the source database during DTS data migration?

Manually execute the DDL on the destination side based on the DDL content executed on the source database, then restart the task. During data migration, please do not use tools like pt-online-schema-change to perform online DDL changes on migration objects in the source database, as this will cause migration failures. If no data other than DTS is written to the destination database, you can use Data Management (DMS) to perform online DDL changes.

How do I handle task failures caused by DDL operations on the destination database during DTS data migration?

If a database or table in the destination database is deleted during DTS incremental migration, causing the task to fail, you can reconfigure the task and not select the database or table that caused the task to fail as the migration object.

Can a migration task be restored after it is released? Can reconfiguring the task ensure data consistency?

A migration task cannot be restored after it is released. When reconfiguring the task, if you do not select Full Migration, data added during the period from task release to new task startup cannot be migrated to the destination database, and data consistency cannot be guaranteed. If your business requires precise data, you can delete the data in the destination database, then reconfigure the migration task and select Schema Migration, Full Migration, and Incremental Migration in Task Steps.

What should I do if a DTS full migration task has no progress for a long time?

If the tables to be migrated are tables without primary keys, full migration will be very slow. It is recommended to add primary keys to the tables to be migrated in the source database before performing migration.

When migrating tables with the same name, is it possible to only transmit source table data when it does not exist in the destination table?

Yes. When configuring the task, you can set Processing Mode For Tables That Already Exist In The Destination to Ignore Errors And Continue. With consistent table structures, during full migration, when the destination database encounters records with the same primary key values as the source database, those records from the source database will not be migrated to the destination database.

How do I configure a cross-account migration task?

You need to use the Alibaba Cloud account that owns the source instance to configure RAM authorization, and then use the Alibaba Cloud account (primary account) that owns the destination instance to configure the DTS task. For more information, see the configuration example in Configure cross-Alibaba Cloud account tasks.

How do I connect to a local database for a data migration task?

You can select Connection Method as Public IP for the local database to configure the migration task. For example operations, see Migrate from self-managed MySQL to RDS MySQL.

How do I handle data migration failure with error DTS-31008?

You can click View Reason or check the solution in Common errors based on the error message.

How do I handle network connectivity issues when accessing self-managed databases through dedicated lines?

Please check if the dedicated line is correctly configured with DTS-related IP whitelists. For the IP whitelist that needs to be added, see Add the IP address ranges of DTS servers to the whitelist of the self-managed database.

Does a migration task with SQL Server as the source support migrating functions?

No. If the migration object selection granularity is at the table level, other objects (such as views, triggers, stored procedures) will also not be migrated to the destination database.

How do I handle slow DTS full migration speed?

It may be because the amount of data to be migrated is relatively large. Please be patient. You can enter the task details page and check the migration progress in the Task Management section under Full Migration.

How do I handle schema migration errors?

Click on the instance ID to enter the task details page, and check the specific error message in the schema migration module under Task Management, then resolve the issue based on the specific error message. For common error solutions, see Common errors.

Are schema migration and full migration charged?

No. For more billing information, see Billable items.

For data migration tasks between Redis instances, will the zset data in the destination be overwritten?

The zset in the destination will be overwritten. If the destination already has a key that is the same as the source, DTS first deletes the corresponding key's zset in the destination and then runs ZADD to add each member of the source zset to the destination.
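For illustration, the following sketch reproduces that overwrite semantics with the Jedis client: when a key exists on both sides, the destination zset is deleted first and the source members are then re-added with ZADD. The host, key name, scores, and members are hypothetical.

```java
import redis.clients.jedis.Jedis;

public class ZsetOverwriteDemo {
    public static void main(String[] args) {
        // Assumption: a destination Redis instance reachable on localhost:6379.
        try (Jedis dest = new Jedis("localhost", 6379)) {
            String key = "ranking"; // hypothetical key present on both source and destination

            // Equivalent of what the task does when the key already exists:
            // drop the destination zset entirely...
            dest.del(key);

            // ...then re-add every member from the source zset with its score.
            dest.zadd(key, 1.0, "player-a");
            dest.zadd(key, 2.0, "player-b");
        }
    }
}
```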

What impact does full migration have on the source database?

The DTS full migration process first performs data slicing, then reads and writes data within the slice range. For the source database, the IOPS will increase during the slicing process; during the process of reading data within the slice range, there will be some impact on the source database's IOPS, CachePool, and outbound bandwidth. Based on DTS's practical experience, these impacts are negligible.

When PolarDB-X 1.0 is the source, does the migration task's source PolarDB-X 1.0 support node expansion or contraction?

No. If the source PolarDB-X 1.0 undergoes node expansion or contraction, you need to reconfigure the task.

Can DTS ensure the uniqueness of data migrated to Kafka?

No. Since data written to Kafka is in an append form, there may be duplicate data when the DTS task is restarted or when source logs are pulled repeatedly. DTS ensures data idempotence, meaning data is arranged in order, and the latest value of duplicate data will be placed at the end.

Will data inconsistency occur if I first configure a full migration task and then configure an incremental data migration task?

Data inconsistency may occur. When incremental data migration tasks are configured separately, they only start migrating data after the incremental migration task is started. Incremental data generated in the source instance before the incremental migration task starts will not be synchronized to the destination instance. If you need to perform migration without stopping the service, we recommend selecting schema migration, full data migration, and incremental data migration when configuring the task.

Do I need to check schema migration when configuring an incremental migration task?

Schema migration occurs before data migration begins, first migrating the definitions of migration objects to the destination instance, such as migrating the table definition of table A to the destination instance. If you need to perform incremental migration, to ensure data migration consistency, we recommend checking schema migration, full data migration, and incremental data migration.

Why does the storage space used by RDS become larger than the source database during migration from a self-managed database to RDS?

Because DTS performs logical migration, it packages the data to be migrated into SQL statements and executes them on the destination RDS instance. This generates binlog data on the destination RDS instance, so during the migration process the storage space used by RDS may be larger than that of the source database.

Does DTS support migration of MongoDB in VPC networks?

Yes, DTS currently supports using ApsaraDB for MongoDB in VPC networks as the source database for migration.

What will happen to the migration data if the source database changes during data migration?

If the migration task is configured with schema migration, full migration, and incremental migration, then data changes that occur in the source database during migration will all be migrated to the destination database by DTS.

Will releasing a completed migration task affect the use of the migrated database?

No. After the migration task is completed (with Running Status showing Completed), you can safely release the migration task.

Does DTS support MongoDB incremental migration?

Yes. For related configuration cases, see Migration solution overview.

What is the difference between using an RDS instance and a self-managed database instance accessed through a public IP as the source instance for a migration task?

When configuring a migration task and selecting an RDS instance, if the RDS instance undergoes DNS modifications, network type switching, or other changes, the DTS migration task can adapt automatically, effectively ensuring link reliability.

Does DTS support migrating self-managed databases on ECS in VPC to RDS instances?

Yes.

  • If the source ECS instance and the destination RDS instance are in the same region, DTS can directly access self-managed databases on ECS instances in VPC.

  • If the source ECS instance and the destination RDS instance are in different regions, the ECS instance needs to have an Elastic IP attached. When configuring the migration task, select the ECS instance as the source instance, and DTS will automatically use the ECS instance's Elastic IP to access the database on the ECS instance.

Does DTS lock tables during migration? Does it affect the source database?

DTS does not lock tables in the source database during either full data migration or incremental data migration. During both phases, the tables being migrated in the source can still be read and written normally.

Does DTS pull data from the primary or secondary database of RDS during RDS migration?

When DTS performs data migration, it pulls data from the primary database of RDS.

Does DTS support scheduled automatic migration?

DTS does not currently support scheduled start of data migration tasks.

Does DTS support data migration for RDS instances in VPC mode?

Yes. When configuring the migration task, simply configure the RDS instance ID directly.

When DTS performs same-account or cross-account migration and synchronization, does it use internal network or public network for ECS and RDS instances? Are there traffic fees?

The network (internal or public) used by DTS for synchronization or migration tasks is not related to whether it crosses accounts, and whether traffic fees are charged depends on the task type.

  • Network used

    • Migration tasks: If performing data migration within the same region, DTS uses the internal network to connect to ECS and RDS instances. If performing cross-region migration, DTS uses the public network to connect to the source instance (ECS or RDS) and the internal network to connect to the destination RDS instance.

    • Synchronization tasks: Uses the internal network.

  • Traffic fees

    • Migration tasks: Public network outbound traffic fees are charged, while other types of DTS instances do not incur traffic fees. Public network outbound traffic fees refer to traffic fees incurred when the destination database instance's Access Method is Public IP Address.

    • Synchronization tasks: No traffic fees are charged.

When using DTS for data migration, will the data in the source database be deleted after migration?

No, when DTS performs data migration, it actually copies the data from the source database to the destination database, without affecting the data in the source database.

When DTS performs data migration between RDS instances, can the name of the migration destination database be specified?

Yes. When performing data migration between RDS instances, you can use the database name mapping function provided by DTS to specify the name of the migration destination database. For details, see Data synchronization or migration between different database names.

How do I handle DTS migration tasks that cannot connect to ECS instances as the source?

It may be because the ECS instance does not have a public IP enabled. Please bind an Elastic IP to the ECS instance and try again. For methods to bind an Elastic IP, see Elastic IP Address.

Why doesn't a migration task between Redis instances display full migration?

Migration between Redis instances supports both full data migration and incremental data migration, which are combined and displayed as Incremental Migration.

Can full migration be skipped?

Yes. After skipping full migration, incremental migration will continue, but errors may occur. It is recommended not to skip full migration.

Does the cluster version of Redis support accessing DTS through a public IP?

No, currently only the standalone version of Redis supports accessing DTS migration instances through a public IP.

What should I be aware of when migrating from MySQL 8.0 to MySQL 5.6?

You need to create the database in MySQL 5.6 before performing the migration operation. It is recommended to keep the source and destination database versions consistent, or migrate from a lower version to a higher version to ensure compatibility. If migrating from a higher version to a lower version, database compatibility issues may exist.

Can accounts from the source database be migrated to the destination database?

Currently, only migration tasks between RDS MySQL instances support migrating accounts. Other migration tasks do not currently support this.

How do I configure parameters when Message Queue for Apache Kafka is the destination?

Please configure according to your actual situation. For configuration methods of some special parameters, see Configure parameters for Message Queue for Apache Kafka instances.

How do I perform scheduled full migration?

You can use the scheduling strategy configuration of the data integration feature to periodically migrate the structure and existing data from the source database to the destination database. For more information, see Configure data integration tasks between RDS MySQL instances.

Is it possible to migrate SQL Server from ECS to a local self-managed SQL Server?

Yes. The local self-managed SQL Server needs to be connected to Alibaba Cloud. For details, see Preparation overview.

Is migration of PostgreSQL databases from other clouds supported?

Yes, provided that the PostgreSQL database on the other cloud allows DTS to access it over the public network.

Note

If the PostgreSQL version is lower than 10.0, incremental migration is not supported.

Change tracking issues

What is the working principle of change tracking?

For the working principle of change tracking, see Product architecture and functional principles.

Will consumer groups be deleted after a change tracking task expires?

After DTS change tracking expires, data consumer groups will be retained for 7 days. If the instance expires for more than 7 days without renewal, it will be released, and the corresponding consumer groups will also be deleted.

Can a read-only instance be used as the source instance for a subscription task?

There are two scenarios:

  • If the instance is a read-only instance that records transaction logs (such as RDS MySQL 5.7 or 8.0 versions), it can be used as a source instance.

  • If the instance is a read-only instance that does not record transaction logs (such as RDS MySQL 5.6 version), it cannot be used as a source instance.

How do I consume subscribed data?

For details, see Consume subscribed data.

Why does the date data format change after using the change tracking feature to transmit data?

DTS stores date data in the YYYY:MM:DD format by default; YYYY-MM-DD is only the display format. Therefore, regardless of the format in which the data is transmitted and written, it is ultimately converted to the default YYYY:MM:DD storage format.
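If your consumer needs the displayed YYYY-MM-DD form, it can normalize the value itself. A minimal sketch, assuming the date arrives as a yyyy:MM:dd string:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DtsDateDemo {
    public static void main(String[] args) {
        // Assumption: a date value received from the subscription channel.
        String raw = "2025:04:01";
        DateTimeFormatter dtsFormat = DateTimeFormatter.ofPattern("yyyy:MM:dd");
        LocalDate date = LocalDate.parse(raw, dtsFormat);
        // ISO_LOCAL_DATE renders the yyyy-MM-dd display format.
        System.out.println(date.format(DateTimeFormatter.ISO_LOCAL_DATE)); // 2025-04-01
    }
}
```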

How do I troubleshoot subscription task issues?

For methods to troubleshoot subscription tasks, see Troubleshoot subscription task issues.

How do I handle the SDK suddenly pausing during normal data download and being unable to subscribe to data?

Check whether your SDK code calls the ackAsConsumed interface to report consumption positions. If ackAsConsumed is not called, the records cached inside the SDK are never released; once the cache is full, no new data can be pulled, so the SDK pauses and can no longer subscribe to data.
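The stall can be pictured as a bounded buffer that only frees a slot on acknowledgment. The class below is a purely hypothetical stand-in for the SDK's internal cache, not the real DTS API; it only mimics the described behavior:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical stand-in for the SDK's internal record cache; not the real DTS API.
public class RecordCacheDemo {
    // Records stay in the cache until the consumer acknowledges them.
    private final BlockingQueue<String> cache = new ArrayBlockingQueue<>(3);

    // Called by the puller thread: blocks (the SDK "pauses") once the cache is full.
    public void onPulled(String record) throws InterruptedException {
        cache.put(record);
    }

    // Equivalent of reporting the consumption position (ackAsConsumed):
    // frees a slot so pulling can resume.
    public void ack() {
        cache.poll();
    }

    public static void main(String[] args) throws InterruptedException {
        RecordCacheDemo demo = new RecordCacheDemo();
        demo.onPulled("r1");
        demo.onPulled("r2");
        demo.onPulled("r3");
        // Without demo.ack(), a fourth onPulled() would block forever -
        // the same symptom as an SDK that never calls ackAsConsumed.
        demo.ack();
        demo.onPulled("r4"); // succeeds because one slot was freed
    }
}
```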

How do I handle the SDK being unable to successfully subscribe to data after restarting?

Before starting the SDK, please first modify the consumption position to be within the data range. For methods to modify the consumption position, see Save and query consumption positions.

How can the client specify a time point for data consumption?

When consuming subscribed data, fill in the initCheckpoint parameter to specify a time point. For more information, see Use SDK sample code to consume subscribed data.

How do I reset the position when a DTS subscription task has accumulated data?

  1. Open the corresponding code file according to the SDK client's usage pattern. For example, DTSConsumerAssignDemo.java or DTSConsumerSubscribeDemo.java.

  2. In the subscription task list's Data Range column, check the modifiable range of the target subscription instance position.

  3. Select a new consumption position based on the actual situation and convert it to a Unix timestamp (see the sketch after this list).

  4. Use the converted new consumption position to replace the old consumption position (initCheckpoint parameter) in the code file.

  5. Restart the client.
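For step 3, any standard time library can produce the Unix timestamp. A minimal sketch in Java, assuming an example position of 2025-04-01 12:00:00 and a UTC+8 instance time zone:

```java
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class CheckpointDemo {
    public static void main(String[] args) {
        // Assumption: the new consumption position chosen from the instance's data range.
        LocalDateTime position = LocalDateTime.parse(
                "2025-04-01 12:00:00",
                DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
        // Assumption: the instance runs in UTC+8; adjust the offset to your region.
        long initCheckpoint = position.toEpochSecond(ZoneOffset.ofHours(8));
        System.out.println(initCheckpoint); // value to place in the initCheckpoint parameter
    }
}
```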

How do I handle being unable to connect to a subscription task's VPC address from the client?

It may be because the machine where the client is located is not in the VPC specified when configuring the subscription task (such as if the client's VPC has been changed). You need to reconfigure the task.

Why is the consumption position on the console larger than the maximum value of the data range?

The subscription channel's data range is updated every minute, while the consumption position is updated every 10 seconds. Therefore, when consuming in real time, the consumption position value may be larger than the maximum value of the subscription channel's data range.

How does DTS ensure that the SDK subscribes to complete transactions?

Based on the provided consumption position, the server will search for the complete transaction corresponding to this consumption position and start distributing data to the downstream from the BEGIN statement of the entire transaction, so you can receive the complete transaction content.

How can I confirm if data is being consumed normally?

If data is being consumed normally, the consumption position in the data transmission console will advance normally.

What does usePublicIp=true mean in the change tracking SDK?

Setting usePublicIp=true in the change tracking SDK configuration means that the SDK accesses the DTS subscription channel through the public network.

Will business be affected when the source database RDS undergoes primary/secondary switching or the primary database restarts for a change tracking task?

When RDS MySQL, RDS PostgreSQL, PolarDB MySQL, PolarDB PostgreSQL, and PolarDB-X 1.0 (with storage type as RDS MySQL) instances undergo primary/secondary switching or restart, DTS will adapt to the switch automatically, and business will not be affected.

Does RDS have a feature that can automatically download binlog to a local server?

DTS's change tracking supports real-time subscription to RDS Binlog logs. You can enable DTS's change tracking service and use DTS's SDK to subscribe to RDS Binlog data and synchronize it to a local server in real-time.

Does the real-time incremental data in change tracking only refer to new data, or does it include modified data?

The incremental data that DTS's change tracking can subscribe to includes: all additions, deletions, modifications, and structure changes (DDL).

Why does the SDK receive duplicate data after restarting when a record was not ACKed by the change tracking task consumer?

When the SDK has an unACKed message, the server pushes out all messages remaining in the buffer, after which the SDK can no longer receive new messages. At this point, the consumption position saved by the server is the position of the last message before the unACKed one. When the SDK restarts, to ensure that no messages are lost, the server resumes pushing from that saved position, so the SDK receives some duplicate messages.

How often is the change tracking consumption position updated, and why does the SDK sometimes receive duplicate data when restarted?

The change tracking SDK must call ackAsConsumed to reply with an ACK to the server after consuming each message. After the server receives the ACK, it updates the consumption position in memory, and then persists the consumption position every 10 seconds. If the SDK is restarted when the latest ACK has not been persisted, to ensure no messages are lost, the server will start pushing messages from the last persisted consumption position, at which point the SDK will receive duplicate messages.
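
Because redelivery after a restart is by design, consumers should be written to be idempotent. The following is a minimal sketch of one way to skip duplicates, assuming each record exposes a monotonically increasing source position (for example, its timestamp); the class and method here are hypothetical illustrations, not part of the SDK.

public class DedupingConsumer {
    // Highest source position that has already been applied downstream.
    private long lastAppliedPosition = -1L;

    // Returns true if the record was applied, false if it was a duplicate.
    public boolean consume(long recordPosition) {
        if (recordPosition <= lastAppliedPosition) {
            return false; // duplicate redelivered after a restart; skip it
        }
        // ... apply the record to the downstream system here ...
        lastAppliedPosition = recordPosition;
        return true;
    }
}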

Can one change tracking instance subscribe to multiple RDS instances?

No, currently one change tracking instance can only subscribe to one RDS instance.

Will change tracking instances have data inconsistency?

No. Change tracking tasks only obtain changes from the source database and do not themselves introduce data inconsistency. If the data consumed by the client differs from what you expect, troubleshoot the client-side consumption logic.

How do I handle the UserRecordGenerator message when consuming subscribed data?

If you encounter messages such as UserRecordGenerator: haven't receive records from generator for 5s when consuming subscribed data, check whether the consumption position is within the position range of the incremental data collection module and make sure that the consumer is running normally.

Does a topic support creating multiple partitions?

No. To ensure global message ordering, each subscription Topic has only one partition, which is fixed and allocated to partition 0.
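
Accordingly, when consuming over a Kafka-compatible protocol, a client only ever needs to assign itself partition 0. The following minimal sketch uses the Apache Kafka Java client; the broker address, topic name, and DTS-specific authentication settings are placeholders you must fill in.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AssignPartitionZero {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "<brokerUrl>"); // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // DTS-specific settings (group.id, SASL credentials) are omitted here.

        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        // Every DTS subscription topic has exactly one partition: partition 0.
        consumer.assign(Collections.singletonList(new TopicPartition("<topic>", 0)));
    }
}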

Does the change tracking SDK support the Go language?

Yes, for sample code, see dts-subscribe-demo.

Does the change tracking SDK support the Python language?

Yes, for sample code, see dts-subscribe-demo.

Does flink-dts-connector support multi-threaded concurrent consumption of subscribed data?

No.

Other issues

What impact will modifying destination database data have during data synchronization or migration tasks?

  • Modifying data in the destination database may cause DTS tasks to fail. If the objects being migrated or synchronized are modified in the destination database during data migration or synchronization, primary key conflicts, missing update records, and similar situations may occur and ultimately cause the DTS task to fail. Operations that do not touch the migration or synchronization objects are safe, however. For example, you can create a new table in the destination instance and write to it, because that table is not among the objects being migrated or synchronized and therefore will not cause DTS to fail.

  • Because DTS reads information from the source instance and migrates or synchronizes its schema, full data, and incremental data to the destination instance, data that you modify in the destination database during the task may be overwritten by data migrated or synchronized from the source database.

Can data be written to both the source and destination databases simultaneously during data synchronization or migration tasks?

Yes, but while a DTS instance is running, if data sources other than DTS write to the destination database, data inconsistency in the destination database or DTS instance exceptions may occur.

What happens if the password of the source or destination database is modified while a DTS instance is running?

The DTS instance will report an error and be interrupted. You can click the instance ID to open the instance details and modify the source or destination account password on the Basic Information tab. Then, on the Task Management tab, find the progress module that reports the error and restart that module.

Why don't some source or destination databases have public IP as a connection method?

It depends on the connection method of the source or destination database, the task type, and the database type. For example, for MySQL database type sources, migration and subscription tasks can select public IP access, while synchronization tasks do not support public IP access.

Is cross-account data migration or data synchronization supported?

Yes. For configuration methods, see Configure cross-Alibaba Cloud account tasks.

Can the source and destination databases be the same database instance?

Yes. If your source and destination databases are the same database instance, we recommend using the mapping function to isolate and distinguish the data, otherwise, it may cause DTS instance failure or data loss. For more information, see Database, table, and column name mapping.

Why do tasks with Redis as the destination database report the error OOM command not allowed when used memory > 'maxmemory'?

It may be because the destination Redis instance's storage space is insufficient. If the architecture type of the destination Redis instance is cluster edition, it may also be because a certain shard has reached its memory limit. You need to upgrade the specification of the destination instance.

What is the AliyunDTSRolePolicy permission policy and what is it used for?

The AliyunDTSRolePolicy policy is used to access cloud resources such as RDS and ECS under the current account or across accounts. It can call relevant cloud resource information when configuring data migration, synchronization, or subscription tasks. For more information, see Grant DTS permission to access cloud resources.

How do I perform RAM role authorization?

When you log in to the console for the first time, DTS will require you to authorize the AliyunDTSDefaultRole role. Please follow the console prompts to jump to the RAM authorization page for authorization. For more information, see Grant DTS permission to access cloud resources.

Important

You need to use an Alibaba Cloud account (primary account) to log in to the console for this operation.

Can the account password filled in for DTS tasks be modified?

The database account password filled in for DTS tasks can be modified. You can click on the instance ID to enter the instance details, and in the Basic Information tab, click Modify Password to modify the account password of the source or destination.

Important

The system account password for DTS tasks cannot be modified.

Why do MaxCompute tables have a base suffix?

  1. Initial schema synchronization.

    DTS synchronizes the schemas of the required objects from the source database to MaxCompute. During initial schema synchronization, DTS adds the _base suffix to the end of the source table name. For example, if the name of the source table is customer, the name of the table in MaxCompute is customer_base.

  2. Initial full data synchronization.

    DTS synchronizes the historical data of the table from the source database to the destination table in MaxCompute. For example, the customer table in the source database is synchronized to the customer_base table in MaxCompute. The data is the basis for subsequent incremental synchronization.

    Note

    The destination table that is suffixed with _base is known as a full baseline table.

  3. Incremental data synchronization.

    DTS creates an incremental data table in MaxCompute. The name of the incremental data table is suffixed with _log, such as customer_log. Then, DTS synchronizes the incremental data that was generated in the source database to the incremental data table.

    Note

    For more information, see Schema of an incremental data table.

How do I handle being unable to get Kafka topics?

It may be because the currently configured Kafka Broker does not have topic information. Please use the following command to check the Broker distribution of topics:

./bin/kafka-topics.sh --describe --zookeeper zk01:2181/kafka --topic topic_name
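
If your Kafka version is 2.2 or later, or ZooKeeper access is unavailable, an equivalent check can usually be run against a broker directly; the broker address below is a placeholder:

./bin/kafka-topics.sh --describe --bootstrap-server broker01:9092 --topic topic_name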

Is it possible to set up a MySQL instance locally as a slave to an RDS instance?

Yes, you can use the Data Transmission Service (DTS) data migration feature to configure real-time data synchronization from RDS to a local self-managed MySQL instance, achieving a master-slave architecture.

How can I copy data from an RDS instance to a newly created RDS instance?

You can use the DTS data migration feature, selecting schema migration, full migration, and incremental migration as the migration types for the migration task. For configuration methods, see Data migration between RDS instances.

Does DTS support creating a copy of a database with the same structure but a different name within an RDS instance?

Yes, the object name mapping feature provided by DTS can create a copy of a database with the same structure but a different name within an RDS instance.

How do I handle DTS instances that always show latency?

Possible reasons include the following:

  • Multiple DTS tasks have been created for the source database instance using different accounts, causing the instance's load to be too high. Please use the same account to create tasks.

  • The destination database instance's memory is insufficient. Please restart the destination database instance after making appropriate business arrangements. If the problem cannot be resolved, please upgrade the destination instance specification or perform a primary/secondary switch.

    Note

    Network disconnections may occur during the primary/secondary switch process. Please ensure that your application has an automatic reconnection mechanism.

How do I handle fields becoming lowercase after synchronization or migration to the destination database in the old console?

Please use the new console to configure tasks and use the destination database object name case policy feature. For more information, see Destination database object name case policy.

Can DTS tasks be restored after being paused?

In general, DTS tasks that have been paused for less than 24 hours can be restored normally; if the data volume is small, DTS tasks that have been paused for less than 7 days can be restored normally. It is recommended that the pause time not exceed 6 hours.

Why does the progress start from 0 after a task is restarted following a pause?

After a task is restarted, DTS first re-queries the data that has already been processed and then continues with the remaining data. During this process, the displayed progress may differ from the actual progress because of this re-query latency.

What is the principle of lockless DDL changes?

For the main principle of lockless DDL changes, see Main principle.

Does DTS support pausing synchronization or migration for a specific table?

No.

Do I need to repurchase if a task fails?

No, you can reconfigure on the original task.

What happens if multiple tasks write to the same destination?

It may lead to data inconsistency.

Why is an instance still in a locked state after renewal?

After renewing a locked DTS instance, it takes some time for the instance to be unlocked. Please be patient.

Do DTS instances support modifying resource groups?

Yes. You can enter the Basic Information page of the instance, and in the Basic Information area, click Modify after Resource Group Name to make changes.

Does DTS have a binlog analysis tool?

DTS does not have a Binlog analysis tool.

Is it normal for an incremental task to always show 95%?

Yes. Incremental tasks are continuous and will not complete, so the progress will not reach 100%.

Why hasn't a DTS task been released after more than 7 days?

Occasionally, frozen tasks may be kept for more than 7 days.

Do tasks that have already been created support modifying ports?

No.

Can the RDS MySQL instances mounted under PolarDB-X 1.0 in a DTS task be downgraded?

Downgrading is not recommended, as it will trigger a primary/secondary switch, which may lead to data loss.

Can source or destination instance specifications be upgraded or downgraded during DTS task operation?

During DTS task operation, upgrading or downgrading source or destination instance specifications may cause task latency or data loss. It is not recommended to change the specifications of source or destination instances.

What impact do DTS tasks have on source and destination instances?

During initial full data synchronization, a certain amount of read and write resources will be occupied on the source and destination databases, which may cause the database load to increase. It is recommended to execute full tasks during business off-peak hours.

What is the approximate latency of DTS tasks?

The latency of DTS tasks cannot be estimated precisely, because it depends on multiple factors such as the operating load of the source instance, the transmission network bandwidth, network latency, and the write performance of the destination instance. In DTS's practical experience, however, the impact of these factors is usually negligible.

If the data transmission console automatically jumps to the Data Management (DMS) console, how do I return to the old version of the data transmission console?

You can click the robot icon in the lower-right corner of the Data Management (DMS) console, and then click Return to Old Version to return to the old version of the data transmission console.

Does DTS support data encryption?

DTS supports securely accessing the source or destination database through SSL encrypted connections to read data from the source database or write data to the destination database, but it does not support encrypting the data itself during the transmission process.

Does DTS support ClickHouse as a source or destination?

No.

Does DTS support AnalyticDB for MySQL 2.0 as a source or destination?

AnalyticDB for MySQL 2.0 can only be used as a destination. Solutions that use AnalyticDB for MySQL 2.0 as the destination are not available in the new console. You can configure these solutions only in the old console.

Why can't I see newly created tasks in the console?

You may have selected the wrong task list or applied filters to the tasks. Select the correct filter options in the corresponding task list, such as the correct region and resource group.

What are the reasons for data inconsistency in data verification tasks?

Common reasons include the following:

  1. The migration or synchronization task has latency.

  2. The source database executed an add column operation with default values, and the task has latency.

  3. There is data written to the destination database other than from DTS.

  4. DDL operations were executed on the source database of a task with multi-table consolidation enabled.

  5. The migration or synchronization task used the database, table, and column name mapping feature.

Can configuration items that are grayed out in created tasks be modified?

No.

How do I configure latency alarms and thresholds?

DTS provides monitoring and alerting functions. You can set alert rules for important monitoring metrics through the console to keep you informed of the running status in a timely manner. For configuration methods, see Configure monitoring alerts.

Can I view the reason for failure for tasks that have been failing for a long time?

No. If a task has been failing for a long time (such as failing for more than 7 days), the relevant logs will be cleared, making it impossible to view the reason for failure.

Can tasks that have been failing for a long time be restored?

No. If a task has been failing for a long time (such as failing for more than 7 days), the relevant logs will be cleared, making it impossible to restore. You need to reconfigure the task.

What is the rdsdt_dtsacct account?

If you did not create the rdsdt_dtsacct account, it may have been created by DTS. DTS creates the internal account rdsdt_dtsacct in some database instances to connect to the source and destination database instances.

How do I view information about heap tables, tables without primary keys, compressed tables, and tables with computed columns in SQL Server?

You can execute the following SQL to check if these types of tables exist in the source database:

  1. Execute the following SQL statement to check for heap tables:

    SELECT s.name AS schema_name, t.name AS table_name
    FROM sys.schemas s
    INNER JOIN sys.tables t ON s.schema_id = t.schema_id
        AND t.type = 'U'
        AND s.name NOT IN ('cdc', 'sys')
        AND t.name NOT IN ('systranschemas')
        AND t.object_id IN (SELECT object_id FROM sys.indexes WHERE index_id = 0);
  2. Execute the following SQL statement to check for tables without primary keys:

    SELECT s.name AS schema_name, t.name AS table_name
    FROM sys.schemas s
    INNER JOIN sys.tables t ON s.schema_id = t.schema_id
        AND t.type = 'U'
        AND s.name NOT IN ('cdc', 'sys')
        AND t.name NOT IN ('systranschemas')
        AND t.object_id NOT IN (SELECT parent_object_id FROM sys.objects WHERE type = 'PK');
  3. Execute the following SQL statement to check for primary key columns that are not contained in clustered index columns:

    SELECT s.name schema_name, t.name table_name
    FROM sys.schemas s
    INNER JOIN sys.tables t ON s.schema_id = t.schema_id
    WHERE t.type = 'U'
        AND s.name NOT IN ('cdc', 'sys')
        AND t.name NOT IN ('systranschemas')
        AND t.object_id IN (
            SELECT pk_colums_counter.object_id AS object_id
            FROM (
                SELECT pk_colums.object_id, SUM(pk_colums.column_id) column_id_counter
                FROM (
                    SELECT sic.object_id object_id, sic.column_id
                    FROM sys.index_columns sic, sys.indexes sis
                    WHERE sic.object_id = sis.object_id
                        AND sic.index_id = sis.index_id
                        AND sis.is_primary_key = 'true'
                ) pk_colums
                GROUP BY object_id
            ) pk_colums_counter
            INNER JOIN (
                SELECT cluster_colums.object_id, SUM(cluster_colums.column_id) column_id_counter
                FROM (
                    SELECT sic.object_id object_id, sic.column_id
                    FROM sys.index_columns sic, sys.indexes sis
                    WHERE sic.object_id = sis.object_id
                        AND sic.index_id = sis.index_id
                        AND sis.index_id = 1
                ) cluster_colums
                GROUP BY object_id
            ) cluster_colums_counter ON pk_colums_counter.object_id = cluster_colums_counter.object_id
                AND pk_colums_counter.column_id_counter != cluster_colums_counter.column_id_counter
        );
  4. Execute the following SQL statement to check for compressed tables:

    SELECT s.name AS schema_name, t.name AS table_name
    FROM sys.objects t, sys.schemas s, sys.partitions p
    WHERE s.schema_id = t.schema_id
        AND t.type = 'U'
        AND s.name NOT IN ('cdc', 'sys')
        AND t.name NOT IN ('systranschemas')
        AND t.object_id = p.object_id
        AND p.data_compression != 0;
  5. Execute the following SQL statement to check for tables with computed columns:

    SELECT s.name AS schema_name, t.name AS table_name
    FROM sys.schemas s
    INNER JOIN sys.tables t ON s.schema_id = t.schema_id
        AND t.type = 'U'
        AND s.name NOT IN ('cdc', 'sys')
        AND t.name NOT IN ('systranschemas')
        AND t.object_id IN (SELECT object_id FROM sys.columns WHERE is_computed = 1);

How do I handle inconsistent structures between source and destination?

You can try using the mapping feature to establish mapping relationships between columns in the source and destination. For more information, see Database, table, and column name mapping.

Note

Modifying column types is not supported.

Does database, table, and column mapping support modifying column types?

No.

Does DTS support limiting the read speed from the source database?

No. You need to evaluate the performance of the source database (such as whether IOPS and network bandwidth meet requirements) before running the task, and it is recommended to run tasks during business off-peak hours.

How do I clean up orphaned documents in MongoDB (sharded cluster architecture)?

Check if orphaned documents exist

  1. Connect to the MongoDB sharded cluster instance through Mongo Shell.

    For connection methods to ApsaraDB for MongoDB, see Connect to a MongoDB sharded cluster instance through Mongo Shell.

  2. Execute the following command to switch to the target database.

    use <db_name>
  3. Execute the following command to view orphaned document information.

    db.<coll_name>.find().explain("executionStats")
    Note

    Check the chunkSkips field in the SHARDING_FILTER stage of executionStats for each shard. If it is not 0, it indicates that there are orphaned documents on the corresponding shard.

    The following return example indicates: In the FETCH stage before the SHARDING_FILTER stage, 102 documents were returned ("nReturned" : 102), then 2 orphaned documents were filtered in the SHARDING_FILTER stage ("chunkSkips" : 2), and finally 100 documents were returned ("nReturned" : 100).

    "stage" : "SHARDING_FILTER",
    "nReturned" : 100,
    ......
    "chunkSkips" : 2,
    "inputStage" : {
        "stage" : "FETCH",
        "nReturned" : 102,

    For more information about the SHARDING_FILTER stage, see MongoDB Manual.

Clean up orphaned documents

Important

If you have multiple databases, you need to clean up orphaned documents for each database.

ApsaraDB for MongoDB instances
Self-managed MongoDB databases
Note

An error occurs if a cleanup script is executed to delete orphaned documents from an ApsaraDB for MongoDB instance whose major version is earlier than 4.2 or an ApsaraDB for MongoDB instance whose minor version is earlier than 4.0.6. For information about how to view the current version of an ApsaraDB for MongoDB instance, see MongoDB minor versions. For information about how to update the minor version or major version of an ApsaraDB for MongoDB instance, see Upgrade the major version of an instance and Update the minor version of an instance.

The cleanupOrphaned command is required to delete orphaned documents. The method of running this command varies based on the version of the MongoDB database.

MongoDB 4.4 and later
MongoDB 4.2 and earlier
  1. Create a JavaScript script file named cleanupOrphaned.js on a server that can connect to the sharded cluster instance.

    Note

    This script is used to delete orphaned documents from all collections in multiple databases in multiple shards. If you want to delete orphaned documents from a specific collection, you can modify some of the parameters in the script file.

    // The names of shards.
    var shardNames = ["shardName1", "shardName2"];
    // The databases from which you want to delete orphaned documents.
    var databasesToProcess = ["database1", "database2", "database3"];
    
    shardNames.forEach(function(shardName) {
        // Traverse the specified databases.
        databasesToProcess.forEach(function(dbName) {
            var dbInstance = db.getSiblingDB(dbName);
            // Obtain the names of all collections of the specified databases.
            var collectionNames = dbInstance.getCollectionNames();
            
            // Traverse all collections.
            collectionNames.forEach(function(collectionName) {
                // The complete collection name.
                var fullCollectionName = dbName + "." + collectionName;
                // Build the cleanupOrphaned command.
                var command = {
                    runCommandOnShard: shardName,
                    command: { cleanupOrphaned: fullCollectionName }
                };
    
                // Run the cleanupOrphaned command.
                var result = db.adminCommand(command); 
                if (result.ok) {
                    print("Cleaned up orphaned documents for collection " + fullCollectionName + " on shard " + shardName);
                    printjson(result);
                } else {
                    print("Failed to clean up orphaned documents for collection " + fullCollectionName + " on shard " + shardName);
                }
            });
        });
    });

    You must modify the shardNames and databasesToProcess parameters in the script file. The following content describes the two parameters:

    • shardNames: the IDs of the shards from which you want to delete orphaned documents. You can view the IDs in the Shard List section on the Basic Information page of the sharded cluster instance. Example: d-bp15a3796d3a****.

    • databasesToProcess: the names of the databases from which you want to delete orphaned documents.

  2. Run the following command in the directory in which the cleanupOrphaned.js script file is stored:

    mongo --host <Mongoshost> --port <Primaryport>  --authenticationDatabase <database> -u <username> -p <password> cleanupOrphaned.js > output.txt

    The following list describes the parameters that you can configure:

    • <Mongoshost>: the endpoint of the mongos node of the sharded cluster instance. Format: s-bp14423a2a51****.mongodb.rds.aliyuncs.com.

    • <Primaryport>: the port number of the mongos node of the sharded cluster instance. Default value: 3717.

    • <database>: the name of the database to which the database account belongs.

    • <username>: the database account.

    • <password>: the password of the database account.

    • output.txt: the file that is used to store execution results.

  1. Create a JavaScript script file named cleanupOrphaned.js on a server that can connect to the sharded cluster instance.

    Note

    This script is used to delete orphaned documents from a specific collection in a database in multiple shards. If you want to delete orphaned documents from multiple collections, you can modify the fullCollectionName parameter in the script file and run the script multiple times. Alternatively, you can modify the script file to traverse all collections.

    function cleanupOrphanedOnShard(shardName, fullCollectionName) {
        var nextKey = { };
        var result;
    
        while ( nextKey != null ) {
            var command = {
                runCommandOnShard: shardName,
                command: { cleanupOrphaned: fullCollectionName, startingFromKey: nextKey }
            };
    
            result = db.adminCommand(command);
            printjson(result);
    
            if (result.ok != 1 || !(result.results.hasOwnProperty(shardName)) || result.results[shardName].ok != 1 ) {
                print("Unable to complete at this time: failure or timeout.")
                break
            }
    
            nextKey = result.results[shardName].stoppedAtKey;
        }
    
        print("cleanupOrphaned done for coll: " + fullCollectionName + " on shard: " + shardName)
    }
    
    var shardNames = ["shardName1", "shardName2", "shardName3"]
    var fullCollectionName = "database.collection"
    
    shardNames.forEach(function(shardName) {
        cleanupOrphanedOnShard(shardName, fullCollectionName);
    });

    You must modify the shardNames and fullCollectionName parameters in the script file. The following content describes the two parameters:

    • shardNames: the IDs of the shards from which you want to delete orphaned documents. You can view the IDs in the Shard List section on the Basic Information page of the sharded cluster instance. Example: d-bp15a3796d3a****.

    • fullCollectionName: You must replace this parameter with the name of the collection from which you want to delete orphaned documents. Format: database name.collection name.

  2. Run the following command in the directory in which the cleanupOrphaned.js script file is stored:

    mongo --host <Mongoshost> --port <Primaryport>  --authenticationDatabase <database> -u <username> -p <password> cleanupOrphaned.js > output.txt

    The following list describes the parameters that you can configure:

    • <Mongoshost>: the endpoint of the mongos node of the sharded cluster instance. Format: s-bp14423a2a51****.mongodb.rds.aliyuncs.com.

    • <Primaryport>: the port number of the mongos node of the sharded cluster instance. Default value: 3717.

    • <database>: the name of the database to which the database account belongs.

    • <username>: the database account.

    • <password>: the password of the database account.

    • output.txt: the file that is used to store execution results.

  1. Download the cleanupOrphaned.js script file on a server that can connect to the self-managed MongoDB database.

    wget "https://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/attach/120562/cn_zh/1564451237979/cleanupOrphaned.js"
  2. Replace test in the cleanupOrphaned.js file with the name of the database from which you want to delete orphaned documents.

    Important

    If you want to delete orphaned documents from multiple databases, repeat Step 2 and Step 3.

  3. Run the following command on a shard to delete the orphaned documents from all collections in the specified database:

    Note

    You must repeat this step for each shard.

    mongo --host <Shardhost> --port <Primaryport>  --authenticationDatabase <database> -u <username> -p <password> cleanupOrphaned.js
    Note
    • <Shardhost>: the IP address of the shard.

    • <Primaryport>: the service port of the primary node in the shard.

    • <database>: the name of the database to which the database account belongs.

    • <username>: the account that is used to log on to the self-managed MongoDB database.

    • <password>: the password that is used to log on to the self-managed MongoDB database.

    Example:

    In this example, a self-managed MongoDB database has three shards, and you must delete the orphaned documents from each shard.

    mongo --host 172.16.1.10 --port 27018  --authenticationDatabase admin -u dtstest -p 'Test123456' cleanupOrphaned.js
    mongo --host 172.16.1.11 --port 27021 --authenticationDatabase admin -u dtstest -p 'Test123456' cleanupOrphaned.js
    mongo --host 172.16.1.12 --port 27024  --authenticationDatabase admin -u dtstest -p 'Test123456' cleanupOrphaned.js

Exception handling

If idleCursors exist on the namespace corresponding to the orphaned documents, they may prevent the cleanup process from completing. In this case, you will find the following information in the mongod logs for the corresponding orphaned documents:

Deletion of DATABASE.COLLECTION range [{ KEY: VALUE1 }, { KEY: VALUE2 }) will be scheduled after all possibly dependent queries finish

You can connect to mongod through Mongo Shell and execute the following command to check whether there are idleCursors on the current shard. If they exist, clean up all idleCursors by restarting mongod or by using the killCursors command, and then clean up the orphaned documents again. For more information, see JIRA ticket.

db.getSiblingDB("admin").aggregate( [{ $currentOp : { allUsers: true, idleCursors: true } },{ $match : { type: "idleCursor" } }] )
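
If you choose the killCursors command instead of restarting mongod, it must be run against the database that contains the namespace; the database name, collection name, and cursor ID below are placeholders taken from the $currentOp output above:

db.getSiblingDB("<database>").runCommand({ killCursors: "<collection>", cursors: [ NumberLong("<cursorId>") ] })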

How do I handle uneven data distribution in MongoDB with a sharded cluster architecture?

Enabling the Balancer feature and pre-sharding can effectively solve the problem of most data being written to a single shard (data skew).

Enable balancer

If the Balancer is disabled or the current time is outside the window period set for the Balancer, you can enable the Balancer or temporarily cancel its window period to start data balancing immediately.

  1. Connect to the MongoDB sharded cluster instance.

  2. In the mongos node command window, switch to the config database.

    use config
  3. Execute the following command according to the actual situation.

    • Enable Balancer feature

      sh.setBalancerState(true)
    • Temporarily cancel Balancer's window period

      db.settings.updateOne( { _id : "balancer" }, { $unset : { activeWindow : true } } )

Pre-sharding

MongoDB supports two sharding methods: range sharding and hash sharding. Pre-sharding spreads chunks as evenly as possible across multiple shard nodes, which balances the load during the DTS data synchronization or migration process.

Hash sharding
Range sharding

Use the numInitialChunks parameter to easily implement pre-sharding. The default value is number of shards × 2, and the maximum can be set to number of shards × 8192. For more information, see sh.shardCollection().

sh.shardCollection("phonebook.contacts", { last_name: "hashed" }, false, {numInitialChunks: 16384})
  • If the source MongoDB is also a sharded cluster architecture, you can obtain the Chunk range of the corresponding sharded table from config.chunks and use it as a reference for the <split_value> value in subsequent pre-sharding commands.

  • If the source MongoDB is a replica set, you can only determine the specific range of the shard key through the find command, and then design reasonable split points.

    # Get the minimum value of the shard key
    db.<coll>.find().sort({<shardKey>:1}).limit(1)
    # Get the maximum value of the shard key
    db.<coll>.find().sort({<shardKey>:-1}).limit(1)

Command format

Note

Taking the splitAt command as an example, for more information, see sh.splitAt(), sh.splitFind(), Split Chunks in a Sharded Cluster.

sh.splitAt("<db>.<coll>", {"<shardKey>":<split_value>})

Example statements

sh.splitAt("test.test", {"id":0})
sh.splitAt("test.test", {"id":50000})
sh.splitAt("test.test", {"id":75000})

After completing the pre-sharding operation, you can execute the sh.status() command on the mongos node to confirm the effect of pre-sharding.

How do I set the number of instances displayed per page in the task list in the console?

Note

This operation uses synchronization instances as an example.

  1. Use one of the following methods to go to the Data Synchronization page and select the region in which the data synchronization instance resides.

    DTS console
    DMS console
    1. Log on to the DTS console.

    2. In the left-side navigation pane, click Data Synchronization.

    3. In the upper-left corner of the page, select the region in which the data synchronization instance resides.

    Note

    The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode and Customize the layout and style of the DMS console.

    1. Log on to the DMS console.

    2. In the top navigation bar, move the pointer over Data + AI and choose DTS (DTS) > Data Synchronization.

    3. From the drop-down list to the right of Data Synchronization Tasks, select the region in which the data synchronization instance resides.

  2. On the right side of the page, drag the scroll bar to the bottom of the page.

  3. In the lower-right corner of the page, select Display Per Page.

    Note

    Display Per Page only supports selection of 10, 20, or 50.

How do I handle a DTS instance ZooKeeper connection timeout?

You can try restarting the instance to see if it can be restored. For restart operations, see Start a DTS instance.

Why is the DTS network segment automatically added back after being deleted in CEN?

This may occur because you used the basic edition transit router of Cloud Enterprise Network (CEN) to connect the database to DTS. If you create a DTS instance using this database, DTS will automatically add the server's IP address ranges to the corresponding router, even if you delete the DTS network segment in CEN.

Do DTS tasks support export?

No, they do not.

How do I use Java to call the OpenAPI?

Calling the OpenAPI in Java is similar to calling it in Python; see the Python SDK call example. You can also go to the Data Transmission Service DTS SDK page, select the target programming language under All Languages, and view the sample code.

How do I use the API to configure ETL functionality for synchronization or migration tasks?

You can configure ETL functionality through common parameters (such as etlOperatorCtl and etlOperatorSetting) in the Reserve parameter of the API interface. For more information, see ConfigureDtsJob and Reserve parameter description.
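
As an illustration only, the ETL-related fields inside the Reserve JSON string might look like the following; the ETL script value is a placeholder, and you should confirm the exact field format in the Reserve parameter description:

{
  "etlOperatorCtl": "Y",
  "etlOperatorSetting": "<your ETL script>"
}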

Does DTS support Azure SQL database?

Yes, DTS supports Azure SQL Database. When Azure SQL Database is used as the source database, you need to set SQL Server Incremental Synchronization Mode to Polling and querying CDC instances for incremental synchronization.

Will source database data be retained after DTS synchronization or migration ends?

Yes, it will. DTS does not delete data from the source database. If you do not want to retain the data in the source database, you need to delete it manually.

Do synchronization or migration instances support rate adjustment after running?

Yes, they do. For more information, see Adjust migration rate.

Does DTS support sampling synchronization or migration of data by time period?

No, this is not supported.

When synchronizing or migrating data, do I need to manually create data tables in the destination database?

For DTS instances that support schema tasks (schema synchronization or schema migration), you do not need to manually create tables in the destination database as long as a schema task is configured (that is, Schema Synchronization is selected under Synchronization Types, or Schema Migration is selected under Migration Types).

When synchronizing or migrating data, do the networks of the source and destination databases need to be connected?

No, they do not.
