When you call certain API operations to configure or query migration, synchronization, or change tracking tasks, you must configure or query the Reserve parameter. This parameter is a JSON-formatted string that supplements or displays the configuration of the source or destination instance, such as the data storage format of a destination Kafka cluster or the ID of a Cloud Enterprise Network (CEN) instance. This topic describes the scenarios in which the Reserve parameter is used and how to configure it.
Notes
Configure the common parameters based on the task type of the instance and the connection type of the database. Then, configure other parameters based on the source and destination database types.
If the source and destination databases share parameters, configure them only once.
If you specify a numeric value, you must enclose it in double quotation marks ("") to convert it to a string.
You can configure the task in the console and then preview the corresponding OpenAPI parameters. This helps you specify the request parameters. For more information, see Preview OpenAPI request parameters.
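As a sketch of how the Reserve string is typically assembled (the parameter names come from the tables in this topic; the values are placeholders, not recommendations):

```python
import json

# Build the Reserve parameter as a dict, then serialize it to the JSON
# string that the API expects. Numeric values are passed as strings,
# that is, enclosed in double quotation marks.
reserve = {
    "targetTableMode": "2",                    # placeholder; a numeric value quoted as a string
    "dts.datamove.source.bps.max": "1048576",  # bytes per second, quoted as a string
}

reserve_json = json.dumps(reserve)
print(reserve_json)
```

The resulting string is what you pass as the Reserve request parameter; every value, including numeric ones, stays a JSON string.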
Related APIs
Common parameters
Configure the parameters in the Reserve parameter based on the DTS instance type and the connection type of the database.
Table 1. Migration or synchronization instances
Parameter | Required | Description |
targetTableMode | Yes | The processing mode for existing tables in the destination database:
|
dts.datamove.source.bps.max | No | The maximum amount of data that can be synchronized or migrated per second for a full or incremental task, in bytes (B). Note This parameter must be used with |
conflict | No | The global conflict resolution policy for a two-way synchronization task. Valid values:
|
filterDDL | No | Specifies whether to filter DDL operations for the forward task in a two-way synchronization. Valid values:
|
autoStartModulesAfterConfig | No | The task startup control parameter. Valid values:
|
etlOperatorCtl | No | Specifies whether to configure the extract, transform, and load (ETL) feature. Valid values:
|
etlOperatorSetting | No | The data processing statements for ETL. For more information, see Data processing DSL syntax. |
etlOperatorColumnReference | No | A field dedicated to T+1 services. This is an ETL operator. |
configKeyMap | No | The configuration information for the ETL operator. |
syncArchitecture | No | The synchronization topology. Valid values:
|
dataCheckConfigure | No | The data validation configuration. For details, see DataCheckConfigure parameter description. |
dbListCaseChangeMode | No | The case sensitivity policy for object names in the destination database. Valid values:
Note For more information, see Case sensitivity policy for object names in the destination database. |
maxRetryTime | No | The time to wait before retrying a connection to the source or destination database after a predictable exception, such as a network error or lock wait. The value must be an integer from 600 to 86,400. The unit is seconds. The default value is 720 minutes (43,200 seconds). Set this value to 30 minutes (1,800 seconds) or more. |
retry.blind.seconds | No | The time to wait before retrying after an unexpected exception occurs in the source or destination database, such as a DDL or DML execution exception. The value must be an integer from 60 to 86,340. The unit is seconds. The default value is 10 minutes (600 seconds). Set this value to 10 minutes (600 seconds) or more. Important
|
modulesExtraConfiguration | No | The extended configuration items for modules. For information about the parameters in the configuration items, see Parameters parameter description. Format: JSON array Example: |
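To make the common parameters in Table 1 concrete, the following sketch assembles a hypothetical Reserve payload for a synchronization instance. The retry values respect the documented ranges; the targetTableMode value is a placeholder:

```python
import json

# Hypothetical common parameters for a synchronization instance (Table 1).
reserve = {
    "targetTableMode": "2",        # placeholder value; see the table for valid values
    "maxRetryTime": "1800",        # seconds; must be an integer from 600 to 86,400
    "retry.blind.seconds": "600",  # seconds; must be an integer from 60 to 86,340
}

# Sanity-check the documented ranges before sending the request.
assert 600 <= int(reserve["maxRetryTime"]) <= 86400
assert 60 <= int(reserve["retry.blind.seconds"]) <= 86340

reserve_json = json.dumps(reserve)
```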
Table 2. Subscription instances
Parameter | Required | Description |
vpcId | Yes | The ID of the virtual private cloud (VPC) where the subscription instance resides. |
vswitchId | Yes | The ID of the vSwitch for the subscription instance. |
startTime | No | The start time for data subscription, as a UNIX timestamp in seconds. |
endTime | No | The end time for data subscription, as a UNIX timestamp in seconds. |
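A minimal sketch of the Reserve payload for a subscription instance, based on Table 2. The VPC and vSwitch IDs are placeholders; the timestamps are UNIX seconds, quoted as strings like every other value:

```python
import json
import time

# Hypothetical Reserve payload for a subscription instance (Table 2).
now = int(time.time())
reserve = {
    "vpcId": "vpc-bp1example",      # placeholder VPC ID
    "vswitchId": "vsw-bp1example",  # placeholder vSwitch ID
    "startTime": str(now - 3600),   # subscribe from one hour ago, as a string
    "endTime": str(now),
}
reserve_json = json.dumps(reserve)
```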
Table 3. Database instances connected through Cloud Enterprise Network (CEN)
Parameter | Required | Description |
srcInstanceId | No | The ID of the CEN instance for the source instance. Example: Note Configure this parameter when the source database instance is connected through CEN. |
destInstanceId | No | The ID of the CEN instance for the destination instance. Example: Note Configure this parameter when the destination database instance is connected through CEN. |
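When both endpoints connect through CEN, the Reserve payload carries both CEN instance IDs, as in this sketch (the IDs are placeholders, based on Table 3):

```python
import json

# Hypothetical Reserve payload when both the source and destination
# database instances are connected through CEN (Table 3).
reserve = {
    "srcInstanceId": "cen-srcexample",    # placeholder CEN instance ID for the source
    "destInstanceId": "cen-destexample",  # placeholder CEN instance ID for the destination
}
reserve_json = json.dumps(reserve)
```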
Source database parameter settings
Configure the parameters in the Reserve parameter based on the source database type.
Table 4. Source database type: MySQL (including RDS for MySQL and self-managed MySQL)
Parameter | Configuration condition | Description |
privilegeMigration | Applies when the source and destination databases are both RDS for MySQL. | Specifies whether to migrate accounts. Valid values:
|
privilegeDbList | | Information about the migration accounts. For the parameter format, see Preview OpenAPI request parameters. Example: |
definer | | Specifies whether to keep the original definer of the database object. The value can be true or false. |
amp.increment.generator.logmnr.mysql.heartbeat.mode | When the source database is self-managed MySQL. | Specifies whether to remove the heartbeat table SQL statements for forward and reverse tasks. Valid values:
|
whitelist.dms.online.ddl.enable | For synchronization or migration instances when the destination database is MySQL (including RDS for MySQL and self-managed MySQL), PolarDB for MySQL, AnalyticDB for MySQL, or AnalyticDB for PostgreSQL. | Use these six parameters together to control whether to copy temporary tables to the destination database. These tables are generated when an online DDL tool runs on a source table.
|
sqlparser.dms.original.ddl | ||
whitelist.ghost.online.ddl.enable | ||
sqlparser.ghost.original.ddl | ||
online.ddl.shadow.table.rule | ||
online.ddl.trash.table.rule | ||
isAnalyzer | Takes effect only when both the source and destination databases are MySQL (including RDS for MySQL and self-managed MySQL). | Specifies whether to enable the migration evaluation feature, which assesses whether the structures of the source and destination databases meet the requirements. The value can be true or false. |
srcSSL | Required when the connection type is Cloud Instance or Self-managed Database on ECS. | The connection method for the source database. Valid values:
|
Table 5. Source database type: PolarDB for MySQL
Parameter | Condition | Description |
amp.increment.generator.logmnr.mysql.heartbeat.mode | Required | Specifies whether to remove heartbeat table SQL statements for forward and reverse tasks. Valid values:
|
whitelist.dms.online.ddl.enable | When the destination database is MySQL (including RDS for MySQL and self-managed MySQL), PolarDB for MySQL, AnalyticDB for MySQL, or AnalyticDB for PostgreSQL for a synchronization or migration task. | These six parameters must be used together to control whether to replicate temporary tables to the destination database. The temporary tables are generated when an online DDL tool runs on the source tables.
|
sqlparser.dms.original.ddl | ||
whitelist.ghost.online.ddl.enable | ||
sqlparser.ghost.original.ddl | ||
online.ddl.shadow.table.rule | ||
online.ddl.trash.table.rule |
Table 6. Source database type: RDS for MariaDB
Parameter | Condition | Description |
srcSSL | When the connection type is Cloud Instance or Self-managed Database on ECS. | The connection method for the source database. Valid values:
|
Table 7. Source database type: Oracle
Parameter | Configuration condition | Description |
isTargetDbCaseSensitive | Required if the destination database is AnalyticDB for PostgreSQL. | Specifies whether to add quotation marks to destination objects. Valid values are true and false. |
Required if the destination database is AnalyticDB for PostgreSQL and the objects for synchronization or migration include tables without primary keys. | Specifies whether to set ROWID as the primary key and distribution key for all tables without primary keys. Valid values are true and false. | |
srcOracleType | Required | The type of the Oracle instance. Valid values:
|
source.column.encoding | Required to specify the write encoding for your business. | The write encoding for your business. Supported encodings:
|
Table 8. Source database type: SQL Server (including ApsaraDB RDS for SQL Server and self-managed SQL Server)
Parameter | Configuration condition | Description |
isTargetDbCaseSensitive | When the destination database is AnalyticDB for PostgreSQL. | Specifies whether to add quotation marks to destination objects. The valid values are true and false. |
source.extractor.type | Required when the destination database is not DataHub and an incremental task is configured. | The mode for incremental synchronization or migration for SQL Server. The valid values are:
|
src.sqlserver.schema.mapper.mode | When the destination database is MySQL (including ApsaraDB RDS for MySQL and self-managed MySQL), PolarDB for MySQL, or AnalyticDB for MySQL. | The structure mapping mode between the source and destination databases. The valid values are:
|
Table 9. Source database type: Tair/Redis
This includes ApsaraDB for Tair (Redis-compatible) and self-managed Redis.
Parameter | Configuration condition | Description |
srcKvStoreMode | Required when the connection type of the database instance is not Alibaba Cloud Instance. | The instance mode of the source self-managed Redis database. Valid values:
|
any.sink.redis.expire.extension.seconds | Required | The additional time in seconds to extend the time-to-live (TTL) for a key when it is migrated from the source database to the destination database. To ensure data consistency, set the extended TTL to 600 seconds or more if you use commands such as the following. |
any.source.redis.use.slave.node | Required when srcKvStoreMode is set to cluster. | Specifies whether to pull data from a slave node. Valid values:
|
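The parameters in Table 9 can be combined into a Reserve payload along the following lines. The values are placeholders; in particular, the minimum TTL extension of 600 seconds applies when the expiration-related commands mentioned above are used:

```python
import json

# Hypothetical Reserve payload for a self-managed Redis source (Table 9).
reserve = {
    "srcKvStoreMode": "cluster",                       # placeholder; see the table for valid values
    "any.sink.redis.expire.extension.seconds": "600",  # extend each key's TTL by 600 s
    "any.source.redis.use.slave.node": "true",         # placeholder; pull data from a slave node
}

# The topic recommends extending the TTL by at least 600 seconds.
assert int(reserve["any.sink.redis.expire.extension.seconds"]) >= 600

reserve_json = json.dumps(reserve)
```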
Table 10. Source database type: MongoDB (including ApsaraDB for MongoDB and self-managed MongoDB)
Parameter | Condition | Description |
srcEngineArchType | Required | The architecture of the source MongoDB database. Valid values:
|
sourceShardEndpointUsername | Required when srcEngineArchType is set to 2. | The username for the shard of the source MongoDB database. |
sourceShardEndpointPassword | | The password for the shard of the source MongoDB database. |
Table 11. Source database type: PolarDB-X 2.0
Parameter | Configuration condition | Description |
amp.increment.generator.logmnr.mysql.heartbeat.mode | Required | Specifies whether to write heartbeat SQL statements for forward and reverse tasks. Valid values:
|
Table 12. Source database type: PolarDB for PostgreSQL (Compatible with Oracle)
Parameter | Condition | Description |
srcHostPortCtl | When the connection type is Public IP Address. | Specifies whether to use a single source or multiple sources for PolarDB for PostgreSQL (Compatible with Oracle). Valid values:
|
srcHostPorts | When srcHostPortCtl is set to multiple. | The IP addresses and port numbers of the source PolarDB for PostgreSQL (Compatible with Oracle) nodes. Separate multiple |
Table 13. Source database type: TiDB
Parameter | Configuration condition | Description |
amp.increment.generator.logmnr.mysql.heartbeat.mode | Required | Specifies whether to remove the SQL for the heartbeat table from forward and reverse tasks:
|
isIncMigration | Required | Specifies whether to perform incremental migration. Valid values are yes or no. Important Sync tasks support only yes. |
srcKafka | Required if isIncMigration is set to yes. | Information about the Kafka cluster downstream of TiDB. |
taskType | The type of the Kafka cluster. Select a type based on the deployment location of the Kafka cluster. Valid values:
| |
bisId |
| |
port | The service port of the Kafka cluster. | |
user | The username for the Kafka cluster. Leave this blank if authentication is disabled for the Kafka cluster. | |
passwd | The password for the Kafka cluster. Leave this blank if authentication is disabled for the Kafka cluster. | |
version | The version of the Kafka cluster. | |
ssl | The connection method for the Kafka cluster. Valid values:
| |
topic | The topic that contains the objects to migrate or sync. | |
host | Required if taskType is set to EXPRESS. | The IP address of the Kafka cluster. |
vpcId | Required if taskType is set to ECS. | The VPC where the ECS instance resides. |
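The TiDB-specific fields in Table 13 nest a Kafka cluster description under srcKafka. The sketch below is an illustrative assumption: the field names come from the table, but the values, and the exact nesting of srcKafka inside Reserve, are placeholders:

```python
import json

# Hypothetical srcKafka block for a TiDB source (Table 13).
src_kafka = {
    "taskType": "EXPRESS",   # placeholder cluster type
    "host": "192.0.2.10",    # required because taskType is EXPRESS in this sketch
    "port": "9092",          # service port of the Kafka cluster
    "user": "",              # empty: authentication is disabled for the cluster
    "passwd": "",            # empty: authentication is disabled for the cluster
    "version": "2.2.0",      # placeholder Kafka version
    "ssl": "0",              # placeholder; see the table for valid values
    "topic": "tidb-cdc",     # placeholder topic name
}
reserve = {
    "isIncMigration": "yes",  # sync tasks support only yes
    "srcKafka": src_kafka,
}
reserve_json = json.dumps(reserve)
```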
Destination database parameter settings
Configure the parameters in the Reserve parameter based on the destination database type.
Table 14. Destination database type: MySQL (including RDS for MySQL and self-managed MySQL)
Parameter | Configuration condition | Description |
privilegeMigration | When both the source and destination database types are RDS for MySQL. For more information, see Source database type is MySQL (including RDS for MySQL and self-managed MySQL). | Specifies whether to migrate accounts. |
privilegeDbList | The information about the accounts to be migrated. | |
definer | Specifies whether to retain the original definer of database objects. | |
whitelist.dms.online.ddl.enable | When the source database type is MySQL (including RDS for MySQL and self-managed MySQL) or PolarDB for MySQL, and the instance is for synchronization or migration. For more information, see Source database parameter settings. | These six parameters must be used together. They control whether to replicate the temporary tables generated by the online DDL tool for the source table to the destination database. |
sqlparser.dms.original.ddl | ||
whitelist.ghost.online.ddl.enable | ||
sqlparser.ghost.original.ddl | ||
online.ddl.shadow.table.rule | ||
online.ddl.trash.table.rule | ||
isAnalyzer | When the database type of both the source and destination instances is MySQL (including RDS for MySQL and self-managed MySQL). | Specifies whether to enable the migration evaluation feature to assess if the structures of the source and destination databases meet the requirements. Valid values are true and false. |
triggerMode | Required | The method to migrate triggers from the source database. Valid values:
Note For more information, see Configure the method to synchronize or migrate triggers. |
destSSL | When the connection type is Cloud Instance or Self-managed Database on ECS. | The connection method for the destination database. Valid values:
|
src.sqlserver.schema.mapper.mode | When the source database type is SQL Server (including RDS for SQL Server and self-managed SQL Server). | The structure mapping mode between the source and destination databases. For more information, see Source database type is SQL Server (including RDS for SQL Server and self-managed SQL Server). |
Table 15. Destination database type: PolarDB for MySQL
Parameter | Configuration condition | Description |
whitelist.dms.online.ddl.enable | When the source database type is MySQL (including RDS for MySQL and self-managed MySQL) or PolarDB for MySQL, and the instance is for synchronization or migration. For more information, see Source database parameter settings. | These six parameters must be used together. They control whether to replicate the temporary tables generated by the online DDL tool for the source table to the destination database. |
sqlparser.dms.original.ddl | ||
whitelist.ghost.online.ddl.enable | ||
sqlparser.ghost.original.ddl | ||
online.ddl.shadow.table.rule | ||
online.ddl.trash.table.rule | ||
anySinkTableEngineType | Required | The engine type of the PolarDB for MySQL instance. Valid values:
|
triggerMode | Required | The method to migrate triggers from the source database. Valid values:
Note For more information, see Configure the method to synchronize or migrate triggers. |
src.sqlserver.schema.mapper.mode | When the source database type is SQL Server (including RDS for SQL Server and self-managed SQL Server). | The structure mapping mode between the source and destination databases. For more information, see Source database type is SQL Server (including RDS for SQL Server and self-managed SQL Server). |
Table 16. Destination database type: AnalyticDB for MySQL
Parameter | Configuration condition | Description |
whitelist.dms.online.ddl.enable | When the source database type is MySQL (including RDS for MySQL and self-managed MySQL) or PolarDB for MySQL, and the instance is for synchronization or migration. For more information, see Source database parameter settings. | These six parameters must be used together. They control whether to replicate the temporary tables generated by the online DDL tool for the source table to the destination database. |
sqlparser.dms.original.ddl | ||
whitelist.ghost.online.ddl.enable | ||
sqlparser.ghost.original.ddl | ||
online.ddl.shadow.table.rule | ||
online.ddl.trash.table.rule | ||
triggerMode | Required | The method to migrate triggers from the source database. Valid values:
Note For more information, see Configure the method to synchronize or migrate triggers. |
src.sqlserver.schema.mapper.mode | When the source database type is SQL Server (including RDS for SQL Server and self-managed SQL Server). | The structure mapping mode between the source and destination databases. For more information, see Source database type is SQL Server (including RDS for SQL Server and self-managed SQL Server). |
traceDatasource | Required | Specifies whether to enable multi-table merging. Valid values are true and false. |
tagColumnValue | When you set whether to customize the tag column. | Specifies whether to customize the
|
adsSqlType | When you need to select SQL operations for incremental synchronization or migration at the instance level. | Selects the SQL operations for incremental synchronization or migration at the instance level. Separate multiple SQL operations with commas. Valid values:
|
Table 17. Destination database type: AnalyticDB for PostgreSQL
Parameter | Configuration condition | Description |
whitelist.dms.online.ddl.enable | When the source database type is MySQL (including RDS for MySQL and self-managed MySQL) or PolarDB for MySQL, and the instance is for synchronization or migration. For more information, see Source database parameter settings. | These six parameters must be used together. They control whether to replicate the temporary tables generated by the online DDL tool for the source table to the destination database. |
sqlparser.dms.original.ddl | ||
whitelist.ghost.online.ddl.enable | ||
sqlparser.ghost.original.ddl | ||
online.ddl.shadow.table.rule | ||
online.ddl.trash.table.rule | ||
isTargetDbCaseSensitive | When the source database type is MySQL (including RDS for MySQL and self-managed MySQL), Oracle, or SQL Server (including RDS for SQL Server and self-managed SQL Server). | Specifies whether to add quotation marks to destination objects. Valid values are true and false. |
syncOperation | When you need to select SQL operations for incremental synchronization or migration at the instance level. | Selects the SQL operations for incremental synchronization or migration at the instance level. Separate multiple SQL operations with commas. Valid values:
|
Table 18. Destination database type: RDS for MariaDB
Parameter | Configuration condition | Description |
triggerMode | Required | The method to migrate triggers from the source database. Valid values:
Note For more information, see Configure the method to synchronize or migrate triggers. |
destSSL | When the connection type is Cloud Instance or Self-managed Database on ECS. | The connection method for the destination database. Valid values:
|
Table 19. Destination database type: MongoDB (including ApsaraDB for MongoDB and self-managed MongoDB)
Parameter | Configuration condition | Description |
destEngineArchType | Required | The architecture type of the destination MongoDB database. Valid values:
|
Table 20. Destination database type: Tair/Redis
This includes ApsaraDB for Tair (Redis-compatible) and self-managed Redis.
Parameter | Configuration condition | Description |
destKvStoreMode | When the connection type for the database instance is not Alibaba Cloud Instance. | The instance mode of the self-managed destination Redis. Valid values:
|
any.sink.redis.expire.extension.seconds | Required | The additional time-to-live (TTL) in seconds for a key when it is migrated from the source to the destination database. To ensure data consistency, if you use commands such as the following, set the extended TTL of the key to 600 seconds or more. |
Table 21. Destination database type: PolarDB for PostgreSQL (Compatible with Oracle)
Parameter | Configuration condition | Description |
destHostPortCtl | When the connection type is Public IP Address. | Specifies whether to use a single source or multiple sources for the destination PolarDB for PostgreSQL (Compatible with Oracle). Valid values:
|
destHostPorts | When destHostPortCtl is multiple. | The IP addresses and port numbers of the destination PolarDB for PostgreSQL (Compatible with Oracle) nodes. Separate multiple |
Table 22. Destination database type: Oracle
Parameter | Configuration condition | Description |
destOracleType | Required | The type of the Oracle instance. Valid values:
|
Table 23. Destination database type: DataHub
Parameter | Configuration condition | Description |
isUseNewAttachedColumn | Required | The naming convention for attached columns is as follows:
|
Table 24. Destination database type: MaxCompute
Parameter | Configuration condition | Description |
isUseNewAttachedColumn | Required | The naming convention for attached columns is as follows:
|
partition | Required | The partition name of the incremental log table. Valid values:
|
Table 25. Destination database type: Elasticsearch
Parameter | Configuration condition | Description |
indexMapping | Required | The name of the index created in the destination Elasticsearch instance. Valid values:
|
Table 26. Destination database type: Kafka
Parameter | Configuration condition | Description |
destTopic | Required | The topic in the destination Kafka cluster to which the migration or synchronization object belongs. |
destVersion | Required | The version of the destination Kafka cluster. Valid values are 1.0, 0.9, and 0.10. Note If the Kafka cluster version is 1.0 or later, enter 1.0. |
destSSL | Required | The method to connect to the destination Kafka cluster. Valid values:
|
sink.kafka.ddl.topic | When you need to specify a topic to store DDL information. | The topic used to store DDL information. If you do not enter a value, DDL information is stored in the topic specified by destTopic by default. |
kafkaRecordFormat | Required | The storage format for data delivered to the destination Kafka cluster. Valid values:
Note For more information about formats, see Data storage formats in message queues. |
destKafkaPartitionKey | Required | The Kafka partition synchronization policy. Valid values:
Note For more information about partition synchronization policies, see Kafka partition synchronization policies. |
destSchemaRegistry | Required | Specifies whether to use Kafka Schema Registry. Valid values are yes and no. |
destKafkaSchemaRegistryUrl | When destSchemaRegistry is set to yes. | The URL or IP address for registering the Avro schema in the Kafka Schema Registry. |
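The parameters in Table 26 can be assembled into a Reserve payload as in the following sketch. All values are placeholders; consult the table for the valid values that this topic elides:

```python
import json

# Hypothetical Reserve payload for a Kafka destination (Table 26).
reserve = {
    "destTopic": "dts-example-topic",   # placeholder topic name
    "destVersion": "1.0",               # enter 1.0 for Kafka clusters of version 1.0 or later
    "destSSL": "0",                     # placeholder; see the table for valid values
    "kafkaRecordFormat": "canal_json",  # placeholder storage format
    "destKafkaPartitionKey": "none",    # placeholder partition synchronization policy
    "destSchemaRegistry": "no",         # not using Kafka Schema Registry
}
reserve_json = json.dumps(reserve)
```

Because destSchemaRegistry is no in this sketch, destKafkaSchemaRegistryUrl is omitted.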
Table 27. Destination database type: OSS for data lakehouse integration tasks
Parameter | Configuration condition | Description |
| Required | The storage location of the destination OSS metadata (data catalog). Valid values:
|
| Required | The directory in the destination OSS for data storage. |
| Required | The format of the integrated data in the destination OSS. Valid values:
|
| When | The ID of the AnalyticDB for MySQL 3.0 cluster that stores the destination OSS metadata. |
| When | The ID of the Alibaba Cloud account or RAM user that owns the AnalyticDB for MySQL 3.0 cluster storing the destination OSS metadata. Note The account or user must have write permissions on the AnalyticDB for MySQL 3.0 cluster database. |
| When the source database is MySQL or SQL Server. | The connection parameters of the source database in JSON format. For example: |
| When you configure Spark task parameters. | The Spark task parameters in JSON format. For example: |