Parameter | Description |
Initial Synchronization | Select Initial Schema Synchronization, Initial Full Data Synchronization, and Initial Incremental Data Synchronization. For more information, see Initial synchronization types. |
Processing Mode of Conflicting Tables | Precheck and Report Errors: checks whether the destination database contains tables that have the same names as tables in the source database (the comparison is illustrated after this row). If the destination database does not contain tables that have the same names as those in the source database, the precheck is passed. Otherwise, an error is returned during the precheck and the data synchronization task cannot be started. Note: If the source and destination databases contain identical table names and the tables in the destination database cannot be deleted or renamed, you can use the object name mapping feature to rename the tables that are synchronized to the destination database. For more information, see Rename an object to be synchronized. Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases. Warning: If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to potential risks. If the source and destination databases have the same schema, DTS does not synchronize data records that have the same primary keys as data records in the destination database during initial data synchronization. However, DTS synchronizes these data records during incremental data synchronization. If the source and destination databases have different schemas, initial data synchronization may fail. In this case, only specific columns are synchronized, or the data synchronization task fails. |
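The precheck in Precheck and Report Errors amounts to a table name comparison between the source database and the destination. The following Java sketch only illustrates that comparison; the table name lists are hypothetical placeholders rather than values read from DTS or Tablestore.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ConflictingTableCheck {
    public static void main(String[] args) {
        // Hypothetical table names; in practice these would come from the
        // source database and the destination Tablestore instance.
        List<String> sourceTables = Arrays.asList("customer", "orders", "inventory");
        List<String> destinationTables = Arrays.asList("orders", "audit_log");

        // The precheck fails if any source table name already exists in the destination.
        Set<String> conflicts = new HashSet<>(sourceTables);
        conflicts.retainAll(new HashSet<>(destinationTables));

        if (conflicts.isEmpty()) {
            System.out.println("Precheck passed: no conflicting table names.");
        } else {
            // Mirrors Precheck and Report Errors: the synchronization task cannot be started.
            System.out.println("Precheck failed. Conflicting tables: " + conflicts);
        }
    }
}
```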
Merge Tables | Yes: In online transaction processing (OLTP) scenarios, business tables are often sharded to improve response speed. You can merge multiple source tables that have the same schema into a single destination table. This feature allows you to synchronize data from multiple tables in the source database to a single table in the Tablestore instance. Note: DTS adds a column named __dts_data_source to the destination table in the Tablestore instance to record the data source. The data type of this column is VARCHAR. DTS specifies the column values in the format <Data synchronization instance ID>:<Source database name>.<Source table name>, which allows DTS to identify each source table. For example, dts********:dtstestdata.customer1 indicates that the source table is customer1 (see the parsing sketch after this row). If you set this parameter to Yes, all selected source tables in the task are merged into the destination table. If you do not need to merge specific source tables, you can create a separate data synchronization task for these tables. No: the default value. Source tables are not merged. |
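Because the __dts_data_source value follows the <Data synchronization instance ID>:<Source database name>.<Source table name> format, a reader of the merged destination table can recover which source table each row came from. The following Java sketch shows one way to split such a value; the sample value is the one from the example above, and the parsing code is only an illustration, not part of DTS or the Tablestore SDK.

```java
public class DtsDataSourceParser {
    public static void main(String[] args) {
        // Sample value from the documentation: instance dts******** wrote a row
        // that originated from table customer1 in database dtstestdata.
        String dtsDataSource = "dts********:dtstestdata.customer1";

        // Format: <Data synchronization instance ID>:<Source database name>.<Source table name>
        int colon = dtsDataSource.indexOf(':');
        int dot = dtsDataSource.lastIndexOf('.');

        String instanceId = dtsDataSource.substring(0, colon);
        String sourceDatabase = dtsDataSource.substring(colon + 1, dot);
        String sourceTable = dtsDataSource.substring(dot + 1);

        System.out.println("instance=" + instanceId
                + ", database=" + sourceDatabase
                + ", table=" + sourceTable);
    }
}
```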
Operation Types | The types of operations that you want to synchronize based on your business requirements. All operation types are selected by default. |
Processing Policy of Dirty Data | The policy for handling dirty data, that is, data records that fail to be written to the Tablestore instance. |
Data Write Mode | |
Batch Write Operation | The operation that is used to write multiple rows of data to the Tablestore instance in a single request. To achieve higher read and write efficiency and reduce the cost of using the Tablestore instance, we recommend that you select BulkImportRequest. |
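A batch write groups multiple row changes into one request to the Tablestore instance; DTS issues these requests internally, so no user code is required. As a rough illustration only, the following sketch uses the Tablestore Java SDK's general-purpose BatchWriteRowRequest API to write three rows in one call; the endpoint, credentials, instance name, table name, and primary key schema are placeholder assumptions, and this is not necessarily the request type that DTS sends when you select BulkImportRequest.

```java
import com.alicloud.openservices.tablestore.SyncClient;
import com.alicloud.openservices.tablestore.model.BatchWriteRowRequest;
import com.alicloud.openservices.tablestore.model.BatchWriteRowResponse;
import com.alicloud.openservices.tablestore.model.ColumnValue;
import com.alicloud.openservices.tablestore.model.PrimaryKeyBuilder;
import com.alicloud.openservices.tablestore.model.PrimaryKeyValue;
import com.alicloud.openservices.tablestore.model.RowPutChange;

public class BatchWriteSketch {
    public static void main(String[] args) {
        // Placeholder endpoint, credentials, and instance name.
        SyncClient client = new SyncClient(
                "https://your-instance.cn-hangzhou.ots.aliyuncs.com",
                "<accessKeyId>", "<accessKeySecret>", "your-instance");
        try {
            BatchWriteRowRequest request = new BatchWriteRowRequest();

            // Put several rows into a single request. The table name and
            // primary key column are assumptions for this sketch.
            for (int i = 0; i < 3; i++) {
                PrimaryKeyBuilder pkBuilder = PrimaryKeyBuilder.createPrimaryKeyBuilder();
                pkBuilder.addPrimaryKeyColumn("id", PrimaryKeyValue.fromString("row-" + i));
                RowPutChange rowChange = new RowPutChange("demo_table", pkBuilder.build());
                rowChange.addColumn("name", ColumnValue.fromString("value-" + i));
                request.addRowChange(rowChange);
            }

            BatchWriteRowResponse response = client.batchWriteRow(request);
            System.out.println("All rows written: " + response.isAllSucceed());
        } finally {
            client.shutdown();
        }
    }
}
```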
More | The advanced settings for writing data to the Tablestore instance. |
└ Queue Size | The length of the queue for writing data to the Tablestore instance. |
└ Thread Quantity | The number of callback threads for writing data to the Tablestore instance. |
└ Concurrency | The maximum number of concurrent threads that are used to write data to the Tablestore instance. |
└ Buckets | The number of concurrent buckets that are used for sequential writes during incremental data synchronization. To improve concurrent write performance, you can set this parameter to a larger value. Note: The value must be less than or equal to the value of the Concurrency parameter. |
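The note on Buckets implies a simple relationship among these advanced settings: the number of buckets cannot exceed the Concurrency value. The following sketch only illustrates that constraint with hypothetical values; these variables are not an API exposed by DTS or Tablestore.

```java
public class WriteTuningCheck {
    public static void main(String[] args) {
        // Hypothetical values for the advanced write settings.
        int queueSize = 1024;    // Queue Size: length of the write queue
        int threadQuantity = 5;  // Thread Quantity: number of callback threads
        int concurrency = 10;    // Concurrency: maximum number of concurrent write threads
        int buckets = 8;         // Buckets: concurrent buckets for sequential incremental writes

        // Per the note above, Buckets must be less than or equal to Concurrency.
        if (buckets > concurrency) {
            throw new IllegalArgumentException(
                    "Buckets (" + buckets + ") must not exceed Concurrency (" + concurrency + ")");
        }
        System.out.printf("queueSize=%d, threads=%d, concurrency=%d, buckets=%d%n",
                queueSize, threadQuantity, concurrency, buckets);
    }
}
```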