When you call an API operation to configure a Data Transmission Service (DTS) task or query the information about a DTS task, you can specify or query the Reserve parameter. The value of this parameter is a JSON string that supplements or exposes the configurations of the source or destination database instance. For example, you can use the Reserve parameter to specify the data storage format of the destination Kafka cluster or the ID of the Cloud Enterprise Network (CEN) instance that is used to access a database instance. This topic describes the scenarios and settings of the Reserve parameter.
Usage notes
You must specify the common parameters based on the type of the DTS instance and the access methods of the source and destination databases, and then specify additional parameters based on your scenario, such as the types of the source and destination databases.
If the source and destination databases of the DTS instance that you want to configure require the same parameter settings, you need to specify these parameters in the Reserve parameter only once.
If you specify a numeric value, you must enclose it in double quotation marks (") so that it is passed as a string, as shown in the sketch after these notes.
When you configure a task in the DTS console, you can move the pointer over Next: Save Task Settings and Precheck in the Advanced Settings step and click Preview OpenAPI parameters to view the parameters that are used to configure the task by calling an API operation.
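For example, the following minimal sketch shows how a Reserve value might be assembled and serialized before it is passed to an API operation such as ConfigureDtsJob. The values are illustrative placeholders, not recommended settings.

```python
import json

# Minimal sketch: build the Reserve value as a dictionary and serialize it to a JSON string.
# Numeric values are enclosed in quotation marks so that they are passed as strings.
reserve = {
    "targetTableMode": "<conflict-handling-option>",  # placeholder; required for migration and synchronization instances
    "maxRetryTime": "43200",                          # numeric value passed as a string
}

reserve_json = json.dumps(reserve)
print(reserve_json)  # pass this string as the Reserve request parameter
```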
Common parameters
Specify the following parameters in the Reserve parameter based on the type of the DTS instance and the access methods of the source and destination databases.
Table 1. Data migration or synchronization instance
| Parameter | Required | Description |
| --- | --- | --- |
| targetTableMode | Yes | The method of processing conflicting tables. Valid values: |
| dts.datamove.source.bps.max | No | The maximum amount of data that can be synchronized or migrated per second during full data synchronization or migration. Unit: MB. Valid values: 0 to 9007199254740991. The value must be an integer. |
| conflict | No | The conflict processing policy of the two-way data synchronization task. Valid values: |
| filterDDL | No | Specifies whether to ignore DDL operations in the forward task of the two-way data synchronization task. Valid values: |
| autoStartModulesAfterConfig | No | Specifies whether to automatically start a precheck after the DTS task is configured. Valid values: |
| etlOperatorCtl | No | Specifies whether to configure the extract, transform, and load (ETL) feature. Valid values: |
| etlOperatorSetting | No | The ETL statements. For more information, see DSL syntax. |
| etlOperatorColumnReference | No | The ETL operator that is dedicated to T+1 business. |
| configKeyMap | No | The configuration information of the ETL operator. |
| syncArchitecture | No | The synchronization topology. Valid values: |
| dataCheckConfigure | No | The data verification settings. For more information, see DataCheckConfigure parameter description. |
| dbListCaseChangeMode | No | The capitalization of object names in the destination database. Valid values: Note: For more information, see Specify the capitalization of object names in the destination instance. |
| maxRetryTime | No | The retry time range for failed connections to the source or destination database. Valid values: 600 to 86400. Unit: seconds. The value must be an integer. Default value: 43200 (720 minutes). We recommend that you set this parameter to a value greater than 1800 (30 minutes). |
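The following sketch assembles a Reserve value from some of the common parameters above for a data migration or synchronization instance. Only targetTableMode is required; the values shown are placeholders that you would replace with the settings for your scenario.

```python
import json

# Illustrative common parameters for a data migration or synchronization instance.
reserve = {
    "targetTableMode": "<value>",              # required: how conflicting tables are processed
    "dts.datamove.source.bps.max": "100",      # full data rate limit in MB per second, passed as a string
    "autoStartModulesAfterConfig": "<value>",  # whether to start the precheck automatically
    "maxRetryTime": "43200",                   # retry window in seconds (600 to 86400)
}
print(json.dumps(reserve))
```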
Table 2. Change tracking instance
| Parameter | Required | Description |
| --- | --- | --- |
| vpcId | Yes | The ID of the virtual private cloud (VPC) in which the change tracking instance is deployed. |
| vswitchId | Yes | The vSwitch ID of the change tracking instance. |
| startTime | No | The beginning of the time range to track data changes. Specify a UNIX timestamp. Unit: seconds. |
| endTime | No | The end of the time range to track data changes. Specify a UNIX timestamp. Unit: seconds. |
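As a sketch, the Reserve value for a change tracking instance might look as follows. The VPC and vSwitch IDs are hypothetical placeholders, and the timestamps are illustrative.

```python
import json

# Illustrative Reserve value for a change tracking instance.
reserve = {
    "vpcId": "vpc-xxxxxxxxxxxxxxxx",      # hypothetical VPC ID
    "vswitchId": "vsw-xxxxxxxxxxxxxxxx",  # hypothetical vSwitch ID
    "startTime": "1700000000",            # UNIX timestamp in seconds, passed as a string
    "endTime": "1700086400",
}
print(json.dumps(reserve))
```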
Table 3. Database instance that is accessed by using a CEN instance
| Parameter | Required | Description |
| --- | --- | --- |
| srcInstanceId | No | The ID of the CEN instance that is used to access the source database instance. Note: You must specify this parameter if the source database instance is accessed by using a CEN instance. |
| destInstanceId | No | The ID of the CEN instance that is used to access the destination database instance. Note: You must specify this parameter if the destination database instance is accessed by using a CEN instance. |
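For a database instance that is accessed by using a CEN instance, the Reserve value only needs to carry the CEN instance ID, as in the following sketch. The ID is a hypothetical placeholder.

```python
import json

# Illustrative Reserve value when the source database is accessed through a CEN instance.
reserve = {
    "srcInstanceId": "cen-xxxxxxxxxxxxxxxx",  # hypothetical CEN instance ID
}
print(json.dumps(reserve))
```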
Source database parameters
Specify the following parameters in the Reserve parameter based on the type of the source database.
Table 4. Source ApsaraDB RDS for MySQL instance and self-managed MySQL database
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| privilegeMigration | The source and destination databases are ApsaraDB RDS for MySQL instances. | Specifies whether to migrate accounts. Valid values: |
| privilegeDbList | | The accounts to be migrated. |
| definer | | Specifies whether to retain the original definers of database objects. Valid values: true and false. |
| amp.increment.generator.logmnr.mysql.heartbeat.mode | The source database is a self-managed MySQL database. | Specifies whether to delete SQL operations on heartbeat tables of the forward and reverse tasks. Valid values: |
| whitelist.dms.online.ddl.enable | The DTS instance is a data migration or synchronization instance. The destination database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, a PolarDB for MySQL cluster, an AnalyticDB for MySQL cluster, or an AnalyticDB for PostgreSQL instance. | Specifies whether to replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database. The six parameters must be used together. |
| sqlparser.dms.original.ddl | | |
| whitelist.ghost.online.ddl.enable | | |
| sqlparser.ghost.original.ddl | | |
| online.ddl.shadow.table.rule | | |
| online.ddl.trash.table.rule | | |
| isAnalyzer | The source and destination databases are ApsaraDB RDS for MySQL instances or self-managed MySQL databases. | Specifies whether to enable the migration assessment feature. The feature checks whether the schemas in the source and destination databases meet the migration requirements. Valid values: true and false. |
| srcSSL | The source database is an Alibaba Cloud database instance or a self-managed database hosted on an Elastic Compute Service (ECS) instance. | Specifies whether to encrypt the connection to the source database. Valid values: |
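The following sketch shows two of the parameters above whose valid values (true and false) are documented, for the case in which both the source and destination databases are ApsaraDB RDS for MySQL instances. The values are illustrative and are shown as strings.

```python
import json

# Illustrative Reserve fragment for a source ApsaraDB RDS for MySQL instance.
reserve = {
    "definer": "true",      # retain the original definers of database objects
    "isAnalyzer": "false",  # do not enable the migration assessment feature
}
print(json.dumps(reserve))
```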
Table 5. Source PolarDB for MySQL cluster
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| amp.increment.generator.logmnr.mysql.heartbeat.mode | This parameter is required for all scenarios. | Specifies whether to delete SQL operations on heartbeat tables of the forward and reverse tasks. Valid values: |
| whitelist.dms.online.ddl.enable | The DTS instance is a data migration or synchronization instance. The destination database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, a PolarDB for MySQL cluster, an AnalyticDB for MySQL cluster, or an AnalyticDB for PostgreSQL instance. | Specifies whether to replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database. The six parameters must be used together. |
| sqlparser.dms.original.ddl | | |
| whitelist.ghost.online.ddl.enable | | |
| sqlparser.ghost.original.ddl | | |
| online.ddl.shadow.table.rule | | |
| online.ddl.trash.table.rule | | |
Table 6. Source ApsaraDB RDS for MariaDB instance
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| srcSSL | The source database is an Alibaba Cloud database instance or a self-managed database hosted on an ECS instance. | Specifies whether to encrypt the connection to the source database. Valid values: |
Table 7. Source Oracle database
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| isTargetDbCaseSensitive | The destination database is an AnalyticDB for PostgreSQL instance. | Specifies whether to enclose the names of destination objects in double quotation marks ("). Valid values: true and false. |
| | The destination database is an AnalyticDB for PostgreSQL instance and the objects to be synchronized or migrated include tables without primary keys. | Specifies whether to set the primary keys and distribution keys of all tables that have no primary keys to the row ID. Valid values: true and false. |
| srcOracleType | This parameter is required for all scenarios. | The architecture type of the Oracle database. Valid values: |
| source.column.encoding | You need to specify the actual encoding format. | The actual encoding format. Valid values: |
Table 8. Source ApsaraDB RDS for SQL Server instance and self-managed SQL Server database
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| isTargetDbCaseSensitive | The destination database is an AnalyticDB for PostgreSQL instance. | Specifies whether to enclose the names of destination objects in double quotation marks ("). Valid values: true and false. |
| source.extractor.type | The destination database is not a DataHub project, and an incremental migration or synchronization task needs to be configured. | The mode in which incremental data is migrated or synchronized from the SQL Server database. Valid values: |
| src.sqlserver.schema.mapper.mode | The destination database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, a PolarDB for MySQL cluster, or an AnalyticDB for MySQL cluster. | The schema mapping mode between the source and destination databases. Valid values: |
Table 9. Source Tair (Redis OSS-Compatible) instance and self-managed Redis database
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| srcKvStoreMode | The access method of the source database is not set to Alibaba Cloud Instance. | The deployment mode of the source self-managed Redis database. Valid values: |
| any.sink.redis.expire.extension.seconds | This parameter is required for all scenarios. | The additional time period for which keys migrated from the source database remain valid in the destination database. Unit: seconds. If specific commands are used, we recommend that you set this parameter to a value greater than 600 to ensure data consistency. These commands include: |
| any.source.redis.use.slave.node | The value of srcKvStoreMode is set to cluster. | Specifies whether to pull data from master nodes or replica nodes if the source self-managed Redis database is deployed in a cluster. Valid values: |
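As a sketch, the Reserve fragment for a self-managed Redis source that is deployed in cluster mode might look as follows. The value cluster is taken from the conditions in the table above, and 900 is an illustrative value greater than the recommended 600 seconds.

```python
import json

# Illustrative Reserve fragment for a self-managed Redis source in cluster mode.
reserve = {
    "srcKvStoreMode": "cluster",
    "any.sink.redis.expire.extension.seconds": "900",  # greater than the recommended 600 seconds
}
print(json.dumps(reserve))
```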
Table 10. Source ApsaraDB for MongoDB instance and self-managed MongoDB database
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| srcEngineArchType | This parameter is required for all scenarios. | The architecture type of the source MongoDB database. Valid values: |
| sourceShardEndpointUsername | The value of srcEngineArchType is set to 2. | The account used to log on to a shard of the source MongoDB database. |
| sourceShardEndpointPassword | | The password used to log on to a shard of the source MongoDB database. |
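The following sketch covers the case in which srcEngineArchType is set to 2, the condition under which shard credentials are required according to the table above. The account and password are hypothetical placeholders.

```python
import json

# Illustrative Reserve fragment for a source MongoDB database with srcEngineArchType set to 2.
reserve = {
    "srcEngineArchType": "2",
    "sourceShardEndpointUsername": "shard_user",      # hypothetical shard account
    "sourceShardEndpointPassword": "shard_password",  # hypothetical shard password
}
print(json.dumps(reserve))
```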
Table 11. Source PolarDB-X 2.0 instance
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| amp.increment.generator.logmnr.mysql.heartbeat.mode | This parameter is required for all scenarios. | Specifies whether to delete SQL operations on heartbeat tables of the forward and reverse tasks. Valid values: |
Table 12. Source PolarDB for PostgreSQL (Compatible with Oracle) cluster
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| srcHostPortCtl | The source database is accessed by using a public IP address. | Specifies whether to select multiple data sources for the PolarDB for PostgreSQL (Compatible with Oracle) cluster. Valid values: |
| srcHostPorts | The value of srcHostPortCtl is set to multiple. | The IP addresses and port numbers of the nodes in the source PolarDB for PostgreSQL (Compatible with Oracle) cluster. |
Table 13. Source TiDB database
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| amp.increment.generator.logmnr.mysql.heartbeat.mode | This parameter is required for all scenarios. | Specifies whether to delete SQL operations on heartbeat tables of the forward and reverse tasks. Valid values: |
| isIncMigration | This parameter is required for all scenarios. | Specifies whether to migrate incremental data. Valid values: yes and no. Important: For data synchronization tasks, you can select only yes. |
| srcKafka | The value of isIncMigration is set to yes. | The information about the downstream Kafka cluster of the TiDB database. |
| taskType | | The type of the Kafka cluster. Specify this parameter based on the deployment location of the Kafka cluster. Valid values: |
| bisId | | |
| port | | The service port number of the Kafka cluster. |
| user | | The account of the Kafka cluster. If authentication is disabled for the Kafka cluster, you do not need to specify this parameter. |
| passwd | | The password of the Kafka cluster. If authentication is disabled for the Kafka cluster, you do not need to specify this parameter. |
| version | | The version of the Kafka cluster. |
| ssl | | Specifies whether to encrypt the connection to the Kafka cluster. Valid values: |
| topic | | The topic of the objects to be migrated or synchronized. |
| host | The value of taskType is set to EXPRESS. | The IP address of the Kafka cluster. |
| vpcId | The value of taskType is set to ECS. | The ID of the VPC in which the ECS instance resides. |
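The following sketch assumes that the Kafka-related fields nest under srcKafka as a JSON object; this nesting is an assumption, so confirm the exact shape by using the Preview OpenAPI parameters feature in the console. All values are hypothetical placeholders.

```python
import json

# Illustrative Reserve fragment for a source TiDB database with incremental migration enabled.
# Assumption: the Kafka fields nest under srcKafka. Verify the exact structure with the
# "Preview OpenAPI parameters" feature in the DTS console.
reserve = {
    "isIncMigration": "yes",
    "srcKafka": {
        "taskType": "EXPRESS",      # value taken from the conditions in the table above
        "host": "172.16.0.10",      # hypothetical IP address of the Kafka cluster
        "port": "9092",
        "user": "kafka_user",       # omit if authentication is disabled
        "passwd": "kafka_password",
        "version": "1.0",
        "topic": "tidb_cdc_topic",  # hypothetical topic name
    },
}
print(json.dumps(reserve))
```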
Destination database parameters
Specify the following parameters in the Reserve parameter based on the type of the destination database.
Table 14. Destination ApsaraDB RDS for MySQL instance and self-managed MySQL database
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| privilegeMigration | The source and destination databases are ApsaraDB RDS for MySQL instances. For more information, see Source ApsaraDB RDS for MySQL instance and self-managed MySQL database. | Specifies whether to migrate accounts. |
| privilegeDbList | | The accounts to be migrated. |
| definer | | Specifies whether to retain the original definers of database objects. |
| whitelist.dms.online.ddl.enable | The DTS instance is a data migration or synchronization instance. The source database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, or a PolarDB for MySQL cluster. For more information, see Source database parameters. | Specifies whether to replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database. The six parameters must be used together. |
| sqlparser.dms.original.ddl | | |
| whitelist.ghost.online.ddl.enable | | |
| sqlparser.ghost.original.ddl | | |
| online.ddl.shadow.table.rule | | |
| online.ddl.trash.table.rule | | |
| isAnalyzer | The source and destination databases are ApsaraDB RDS for MySQL instances or self-managed MySQL databases. | Specifies whether to enable the migration assessment feature. The feature checks whether the schemas in the source and destination databases meet the migration requirements. Valid values: true and false. |
| triggerMode | This parameter is required for all scenarios. | The method used to migrate triggers from the source database. Valid values: Note: For more information, see Synchronize or migrate triggers from the source database. |
| destSSL | The destination database is an Alibaba Cloud database instance or a self-managed database hosted on an ECS instance. | Specifies whether to encrypt the connection to the destination database. Valid values: |
| src.sqlserver.schema.mapper.mode | The source database is an ApsaraDB RDS for SQL Server instance or a self-managed SQL Server database. | The schema mapping mode between the source and destination databases. For more information, see Source ApsaraDB RDS for SQL Server instance and self-managed SQL Server database. |
Table 15. Destination PolarDB for MySQL cluster
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| whitelist.dms.online.ddl.enable | The DTS instance is a data migration or synchronization instance. The source database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, or a PolarDB for MySQL cluster. For more information, see Source database parameters. | Specifies whether to replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database. The six parameters must be used together. |
| sqlparser.dms.original.ddl | | |
| whitelist.ghost.online.ddl.enable | | |
| sqlparser.ghost.original.ddl | | |
| online.ddl.shadow.table.rule | | |
| online.ddl.trash.table.rule | | |
| anySinkTableEngineType | This parameter is required for all scenarios. | The engine type of the PolarDB for MySQL cluster. Valid values: |
| triggerMode | This parameter is required for all scenarios. | The method used to migrate triggers from the source database. Valid values: Note: For more information, see Synchronize or migrate triggers from the source database. |
| src.sqlserver.schema.mapper.mode | The source database is an ApsaraDB RDS for SQL Server instance or a self-managed SQL Server database. | The schema mapping mode between the source and destination databases. For more information, see Source ApsaraDB RDS for SQL Server instance and self-managed SQL Server database. |
Table 16. Destination AnalyticDB for MySQL cluster
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| whitelist.dms.online.ddl.enable | The DTS instance is a data migration or synchronization instance. The source database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, or a PolarDB for MySQL cluster. For more information, see Source database parameters. | Specifies whether to replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database. The six parameters must be used together. |
| sqlparser.dms.original.ddl | | |
| whitelist.ghost.online.ddl.enable | | |
| sqlparser.ghost.original.ddl | | |
| online.ddl.shadow.table.rule | | |
| online.ddl.trash.table.rule | | |
| triggerMode | This parameter is required for all scenarios. | The method used to migrate triggers from the source database. Valid values: Note: For more information, see Synchronize or migrate triggers from the source database. |
| src.sqlserver.schema.mapper.mode | The source database is an ApsaraDB RDS for SQL Server instance or a self-managed SQL Server database. | The schema mapping mode between the source and destination databases. For more information, see Source ApsaraDB RDS for SQL Server instance and self-managed SQL Server database. |
| traceDatasource | This parameter is required for all scenarios. | Specifies whether to enable the multi-table merging feature. Valid values: true and false. |
| tagColumnValue | You need to specify whether to customize the tag column. | Specifies whether to customize the tag column. |
| adsSqlType | You need to select the SQL operations that you want to incrementally synchronize or migrate at the instance level. | The SQL operations that you want to incrementally synchronize or migrate at the instance level. Separate multiple SQL operations with commas (,). Valid values: |
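As a sketch, a Reserve fragment for a destination AnalyticDB for MySQL cluster that enables multi-table merging might look as follows. traceDatasource accepts true and false per the table above, and the other keys follow the same string-valued pattern when they are needed.

```python
import json

# Illustrative Reserve fragment for a destination AnalyticDB for MySQL cluster.
reserve = {
    "traceDatasource": "true",  # enable the multi-table merging feature
}
print(json.dumps(reserve))
```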
Table 17. Destination AnalyticDB for PostgreSQL instance
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| whitelist.dms.online.ddl.enable | The DTS instance is a data migration or synchronization instance. The source database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, or a PolarDB for MySQL cluster. For more information, see Source database parameters. | Specifies whether to replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database. The six parameters must be used together. |
| sqlparser.dms.original.ddl | | |
| whitelist.ghost.online.ddl.enable | | |
| sqlparser.ghost.original.ddl | | |
| online.ddl.shadow.table.rule | | |
| online.ddl.trash.table.rule | | |
| isTargetDbCaseSensitive | The source database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, an Oracle database, an ApsaraDB RDS for SQL Server instance, or a self-managed SQL Server database. | Specifies whether to enclose the names of destination objects in double quotation marks ("). Valid values: true and false. |
| syncOperation | You need to select the SQL operations that you want to incrementally synchronize or migrate at the instance level. | The SQL operations that you want to incrementally synchronize or migrate at the instance level. Separate multiple SQL operations with commas (,). Valid values: |
Table 18. Destination ApsaraDB RDS for MariaDB instance
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| triggerMode | This parameter is required for all scenarios. | The method used to migrate triggers from the source database. Valid values: Note: For more information, see Synchronize or migrate triggers from the source database. |
| destSSL | The destination database is an Alibaba Cloud database instance or a self-managed database hosted on an ECS instance. | Specifies whether to encrypt the connection to the destination database. Valid values: |
Table 19. Destination ApsaraDB for MongoDB instance and self-managed MongoDB database
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| destEngineArchType | This parameter is required for all scenarios. | The architecture type of the destination MongoDB database. Valid values: |
Table 20. Destination Tair (Redis OSS-Compatible) instance and self-managed Redis database
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| destKvStoreMode | The access method of the destination database is not set to Alibaba Cloud Instance. | The deployment mode of the destination self-managed Redis database. Valid values: |
| any.sink.redis.expire.extension.seconds | This parameter is required for all scenarios. | The additional time period for which keys migrated from the source database remain valid in the destination database. Unit: seconds. If specific commands are used, we recommend that you set this parameter to a value greater than 600 to ensure data consistency. These commands include: |
Table 21. Destination PolarDB for PostgreSQL (Compatible with Oracle) cluster
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| destHostPortCtl | The destination database is accessed by using a public IP address. | Specifies whether to select multiple data sources for the PolarDB for PostgreSQL (Compatible with Oracle) cluster. Valid values: |
| destHostPorts | The value of destHostPortCtl is set to multiple. | The IP addresses and port numbers of the nodes in the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster. |
Table 22. Destination Oracle database
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| destOracleType | This parameter is required for all scenarios. | The architecture type of the Oracle database. Valid values: |
Table 23. Destination DataHub project
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| isUseNewAttachedColumn | This parameter is required for all scenarios. | The naming rules for additional columns. Valid values: |
Table 24. Destination MaxCompute project
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| isUseNewAttachedColumn | This parameter is required for all scenarios. | The naming rules for additional columns. Valid values: |
| partition | This parameter is required for all scenarios. | The name of the partition of incremental data tables. |
Table 25. Destination Elasticsearch cluster
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| indexMapping | This parameter is required for all scenarios. | The name of the index to be created in the destination Elasticsearch cluster. Valid values: |
Table 26. Destination Kafka cluster
| Parameter | Configuration condition | Description |
| --- | --- | --- |
| destTopic | This parameter is required for all scenarios. | The topic of the migrated or synchronized objects in the destination Kafka cluster. |
| destVersion | This parameter is required for all scenarios. | The version of the destination Kafka cluster. Valid values: 1.0, 0.9, and 0.10. Note: If the version of the Kafka cluster is 1.0 or later, set this parameter to 1.0. |
| destSSL | This parameter is required for all scenarios. | Specifies whether to encrypt the connection to the destination Kafka cluster. Valid values: |
| sink.kafka.ddl.topic | You need to specify the topic that stores the DDL information. | The topic that stores the DDL information. If you do not specify this parameter, the DDL information is stored in the topic that is specified by destTopic. |
| kafkaRecordFormat | This parameter is required for all scenarios. | The storage format in which data is shipped to the destination Kafka cluster. Valid values: Note: For more information, see Data formats of a Kafka cluster. |
| destKafkaPartitionKey | This parameter is required for all scenarios. | The policy that is used to synchronize data to Kafka partitions. Valid values: Note: For more information, see Specify the policy for synchronizing data to Kafka partitions. |
| destSchemaRegistry | This parameter is required for all scenarios. | Specifies whether to use Kafka Schema Registry. Valid values: yes and no. |
| destKafkaSchemaRegistryUrl | The value of destSchemaRegistry is set to yes. | The URL or IP address of your Avro schema that is registered with Kafka Schema Registry. |
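The following sketch shows a Reserve fragment for a destination Kafka cluster. The topic name is a hypothetical placeholder; 1.0 and no are taken from the valid values documented above.

```python
import json

# Illustrative Reserve fragment for a destination Kafka cluster.
reserve = {
    "destTopic": "dts_target_topic",  # hypothetical topic name
    "destVersion": "1.0",             # use 1.0 for Kafka 1.0 and later
    "destSchemaRegistry": "no",       # Kafka Schema Registry is not used
}
print(json.dumps(reserve))
```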