
Data Transmission Service:Reserve parameter description

Last Updated: Dec 10, 2025

When you call certain API operations to configure or query migration, synchronization, or change tracking (subscription) tasks, you must configure or query the Reserve parameter. This parameter is a JSON-formatted string that supplements or displays configuration information about the source or destination instance, such as the data storage format of a destination Kafka cluster or the ID of a Cloud Enterprise Network (CEN) instance. This topic describes the scenarios in which the Reserve parameter is used and how to configure it.

Notes

  • Configure the common parameters based on the task type of the instance and the connection type of the database. Then, configure other parameters based on the source and destination database types.

  • If a parameter applies to both the source and destination databases, configure it only once.

  • If you specify a numeric value, you must enclose it in double quotation marks ("") to convert it to a string.

  • You can configure the task in the console and then preview the corresponding OpenAPI parameters. This helps you specify the request parameters. For more information, see Preview OpenAPI request parameters.
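
The following Python sketch illustrates these notes: it assembles a hypothetical Reserve value and serializes it to the JSON string that the API expects. The parameter names come from the tables below; the values are examples only, not a complete task configuration.

import json

# A minimal sketch: build a hypothetical Reserve value for a one-way
# synchronization task. Numeric settings are quoted as strings, as
# required by the note above.
reserve = {
    "targetTableMode": "2",
    "syncArchitecture": "oneway",
    "autoStartModulesAfterConfig": "auto",
}

# The Reserve request parameter itself is a JSON-formatted string.
reserve_str = json.dumps(reserve)
print(reserve_str)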


Common parameters

Configure the parameters in the Reserve parameter based on the DTS instance type and the connection type of the database.

Table 1. Migration or synchronization instances

Parameter

Required

Description

targetTableMode

Yes

The processing mode for existing tables in the destination database:

  • 0: Runs a precheck and reports an error to stop the task.

  • 2: Ignores the error and continues the task.

dts.datamove.source.bps.max

No

The maximum amount of data that can be synchronized or migrated per second for a full or incremental task, in bytes (B).

Note

This parameter must be used together with fullDynamicConfig or incDynamicConfig. For example, when calling the API, pass the following:

{
  "fullDynamicConfig": {"dts.datamove.source.bps.max": 10485760},
  "incDynamicConfig": {"dts.datamove.source.bps.max": 10485760}
}

conflict

No

The global conflict resolution policy for a two-way synchronization task. Valid values:

  • overwrite: If a data conflict occurs during synchronization, the conflicting record in the destination database is overwritten.

  • interrupt: If a data conflict occurs during synchronization, the sync task reports an error and exits. The task enters the Failed state. You must fix the task manually.

  • ignore: If a data conflict occurs during synchronization, the current synchronization statement is skipped and the task continues. The conflicting record in the destination database is used.

filterDDL

No

Specifies whether to filter DDL operations for the forward task in a two-way synchronization. Valid values:

  • true: Does not sync DDL operations.

  • false: Syncs DDL operations.

    Important

    The reverse task automatically filters DDL operations.

autoStartModulesAfterConfig

No

The task startup control parameter. Valid values:

  • none: After the DTS task is configured, modules such as precheck are not started. You must start the task manually.

  • auto: After the DTS task is configured, the precheck module and all subsequent modules are automatically started.

    Note

    The default value is auto in the purchase-first scenario. This parameter does not take effect in the configure-first scenario.

etlOperatorCtl

No

Specifies whether to configure the extract, transform, and load (ETL) feature. Valid values:

  • Y: Yes, configure the ETL feature.

  • N: No, do not configure the ETL feature.

etlOperatorSetting

No

The data processing statements for ETL. For more information, see Data processing DSL syntax.

etlOperatorColumnReference

No

An ETL operator field dedicated to T+1 services.

configKeyMap

No

The configuration information for the ETL operator.

syncArchitecture

No

The synchronization topology. Valid values:

  • oneway: one-way synchronization.

  • bidirectional: two-way synchronization.

dataCheckConfigure

No

The data validation configuration. For details, see DataCheckConfigure parameter description.

dbListCaseChangeMode

No

The case sensitivity policy for object names in the destination database. Valid values:

  • default: Uses the default DTS policy.

  • source: Keeps the same case as the source database.

  • dest_upper: Follows the default policy of the destination database (uppercase).

  • dest_lower: Follows the default policy of the destination database (lowercase).

maxRetryTime

No

The retry time range for connecting to the source or destination database after a predictable exception occurs, such as a network error or lock wait. The value must be an integer from 600 to 86,400. The unit is seconds. The default value is 720 minutes (43,200 seconds). Set this value to 30 minutes (1,800 seconds) or more.

retry.blind.seconds

No

The retry time range after an unexpected exception occurs in the source or destination database, such as a DDL or DML execution exception. The value must be an integer from 60 to 86,340. The unit is seconds. The default value is 10 minutes (600 seconds). Set this value to 10 minutes (600 seconds) or more.

Important
  • The value of retry.blind.seconds must be less than the value of maxRetryTime.

  • If the database has multiple DTS instances, the shortest retry time set among the DTS instances is used.

modulesExtraConfiguration

No

The extended configuration items for modules. For information about the parameters in the configuration items, see Parameters parameter description.

Format: JSON array

Example:

[
   {
   "module": "07",
   "name": "selectdb.reservoir.group.by.target.schema",
   "value": true
   }
]
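
As a hedged example, the sketch below combines several common parameters from Table 1 into a single Reserve value, including the fullDynamicConfig and incDynamicConfig wrappers required by dts.datamove.source.bps.max (10 MB/s here) and the modulesExtraConfiguration array shown above. Which keys your task needs depends on your scenario; verify the exact shape with the OpenAPI request parameter preview.

import json

# Sketch only: common parameters for a migration or synchronization
# instance. 10485760 bytes per second = 10 MB/s.
reserve = json.dumps({
    "targetTableMode": "0",
    "conflict": "overwrite",
    "fullDynamicConfig": {"dts.datamove.source.bps.max": 10485760},
    "incDynamicConfig": {"dts.datamove.source.bps.max": 10485760},
    "modulesExtraConfiguration": [
        {
            "module": "07",
            "name": "selectdb.reservoir.group.by.target.schema",
            "value": True,
        }
    ],
})
print(reserve)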

Table 2. Subscription instances

Parameter

Required

Description

vpcId

Yes

The ID of the virtual private cloud (VPC) where the subscription instance resides.

vswitchId

Yes

The ID of the virtual switch for the subscription instance.

startTime

No

The start time for data subscription, as a UNIX timestamp in seconds.

endTime

No

The end time for data subscription, as a UNIX timestamp in seconds.
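
Because startTime and endTime are UNIX timestamps in seconds, they can be derived from calendar dates before being quoted as strings. A sketch, with placeholder VPC and vSwitch IDs:

import json
from datetime import datetime, timezone

# Convert calendar dates to UNIX timestamps in seconds.
start = int(datetime(2025, 1, 1, tzinfo=timezone.utc).timestamp())
end = int(datetime(2025, 1, 2, tzinfo=timezone.utc).timestamp())

# Placeholder IDs; replace with the IDs of your own VPC and vSwitch.
reserve = json.dumps({
    "vpcId": "vpc-bp1opxu1zkhn00gzv****",
    "vswitchId": "vsw-bp10df3mxae6lpmku****",
    "startTime": str(start),
    "endTime": str(end),
})
print(reserve)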

Table 3. Database instances connected through Cloud Enterprise Network (CEN)

Parameter

Required

Description

srcInstanceId

No

The ID of the CEN instance for the source instance. Example:

{
   "srcInstanceId": "cen-9kqshqum*******"
}
Note

Configure this parameter when the source database instance is connected through CEN.

destInstanceId

No

The ID of the CEN instance for the destination instance. Example:

{
   "destInstanceId": "cen-9kqshqum*******"
}
Note

Configure this parameter when the destination database instance is connected through CEN.

Source database parameter settings

Configure the parameters in the Reserve parameter based on the source database type.

Table 4. Source database type: MySQL (including RDS MySQL and self-managed MySQL)

Parameter

Configuration condition

Description

privilegeMigration

Applies when the source and destination databases are both RDS for MySQL.

Specifies whether to migrate accounts. Valid values:

  • true: Migrate accounts.

  • false (default): Do not migrate accounts.

privilegeDbList

Information about the migration accounts. For the parameter format, see Preview OpenAPI request parameters. Example:

[{\"user\":\"user1\",\"host\":\"%\",\"oldHost\":\"%\",\"is_migrate\":true},{\"user\":\"user2\",\"host\":\"%\",\"oldHost\":\"%\",\"is_migrate\":true}]

definer

Specifies whether to keep the original definer of the database object. The value can be true or false.

amp.increment.generator.logmnr.mysql.heartbeat.mode

When the source database is self-managed MySQL.

Specifies whether to remove the heartbeat table SQL statements for forward and reverse tasks. Valid values:

  • none: Does not write heartbeat SQL information to the source database.

  • N: Writes heartbeat SQL information to the source database.

whitelist.dms.online.ddl.enable

For synchronization or migration instances when the destination database is MySQL (including RDS for MySQL and self-managed MySQL), PolarDB for MySQL, AnalyticDB for MySQL, or AnalyticDB for PostgreSQL.

Use these six parameters together to control whether to copy temporary tables to the destination database. These tables are generated when an online DDL tool runs on a source table.

  • To copy data from temporary tables created by online DDL changes on the source table:

    {
      "whitelist.dms.online.ddl.enable": "true",
      "sqlparser.dms.original.ddl": "false",
      "whitelist.ghost.online.ddl.enable": "true",
      "sqlparser.ghost.original.ddl": "false"
    }
  • Do not copy data from temporary tables created by online DDL changes. Instead, sync only the original DDL statements that DMS executed on the source database:

    {
      "whitelist.dms.online.ddl.enable": "false",
      "sqlparser.dms.original.ddl": "true",
      "whitelist.ghost.online.ddl.enable": "false",
      "sqlparser.ghost.original.ddl": "false"
    }
  • Do not copy data from temporary tables created by online DDL changes. Instead, sync only the original DDL statements that gh-ost executed on the source database:

    {
      "whitelist.dms.online.ddl.enable": "false",
      "sqlparser.dms.original.ddl": "false",
      "whitelist.ghost.online.ddl.enable": "false",
      "sqlparser.ghost.original.ddl": "true",
      "online.ddl.shadow.table.rule": "^_(.+)_(?:gho|new)$",
      "online.ddl.trash.table.rule": "^_(.+)_(?:ghc|del|old)$"
    }
    Note

    Use the default regular expressions or configure custom ones for gh-ost shadow tables (online.ddl.shadow.table.rule) and trash tables (online.ddl.trash.table.rule).

sqlparser.dms.original.ddl

whitelist.ghost.online.ddl.enable

sqlparser.ghost.original.ddl

online.ddl.shadow.table.rule

online.ddl.trash.table.rule

isAnalyzer

Takes effect only when both the source and destination instances are MySQL databases (including RDS MySQL and self-managed MySQL).

Specifies whether to enable the migration evaluation feature. This feature assesses if the structures of the source and destination databases meet the requirements. The value can be true or false.

srcSSL

Required when the connection type is Cloud Instance or ECS Self-managed Database.

The connection method for the source database. Valid values:

  • 0: Unencrypted connection.

  • 1: SSL-encrypted connection.
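
For instance, the following sketch builds a Reserve fragment for a self-managed MySQL source that syncs only the original DDL statements executed by gh-ost, which is the third case described above. It illustrates how the parameter group is assembled, not a complete task configuration.

import json

# Sketch: sync only the original DDL statements executed by gh-ost and
# skip the temporary tables it creates (the third case above).
reserve = json.dumps({
    "srcSSL": "0",
    "whitelist.dms.online.ddl.enable": "false",
    "sqlparser.dms.original.ddl": "false",
    "whitelist.ghost.online.ddl.enable": "false",
    "sqlparser.ghost.original.ddl": "true",
    "online.ddl.shadow.table.rule": "^_(.+)_(?:gho|new)$",
    "online.ddl.trash.table.rule": "^_(.+)_(?:ghc|del|old)$",
})
print(reserve)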

Table 5. Source database type: PolarDB for MySQL

Parameter

Condition

Description

amp.increment.generator.logmnr.mysql.heartbeat.mode

Required

Specifies whether to remove heartbeat table SQL statements for forward and reverse tasks. Valid values:

  • none: Does not write heartbeat SQL information to the source database.

  • N: Writes heartbeat SQL information to the source database.

whitelist.dms.online.ddl.enable

When the destination database is MySQL (including RDS for MySQL and self-managed MySQL), PolarDB for MySQL, AnalyticDB for MySQL, or AnalyticDB for PostgreSQL for a synchronization or migration task.

These six parameters must be used together to control whether to replicate temporary tables to the destination database. The temporary tables are generated when an online DDL tool runs on the source tables.

  • To replicate data from temporary tables generated by online DDL changes on the source table:

    {
      "whitelist.dms.online.ddl.enable": "true",
      "sqlparser.dms.original.ddl": "false",
      "whitelist.ghost.online.ddl.enable": "true",
      "sqlparser.ghost.original.ddl": "false"
    }
  • To synchronize only the original DDL data executed by DMS on the source database, and not replicate data from temporary tables generated by online DDL changes:

    {
      "whitelist.dms.online.ddl.enable": "false",
      "sqlparser.dms.original.ddl": "true",
      "whitelist.ghost.online.ddl.enable": "false",
      "sqlparser.ghost.original.ddl": "false"
    }
  • To synchronize only the original DDL data executed by gh-ost on the source database, and not replicate data from temporary tables generated by online DDL changes:

    {
      "whitelist.dms.online.ddl.enable": "false",
      "sqlparser.dms.original.ddl": "false",
      "whitelist.ghost.online.ddl.enable": "false",
      "sqlparser.ghost.original.ddl": "true",
      "online.ddl.shadow.table.rule": "^_(.+)_(?:gho|new)$",
      "online.ddl.trash.table.rule": "^_(.+)_(?:ghc|del|old)$"
    }
    Note

    Use the default regular expressions or configure your own for gh-ost shadow tables (online.ddl.shadow.table.rule) and trash tables (online.ddl.trash.table.rule).

sqlparser.dms.original.ddl

whitelist.ghost.online.ddl.enable

sqlparser.ghost.original.ddl

online.ddl.shadow.table.rule

online.ddl.trash.table.rule

Table 6. Source database type: RDS MariaDB

Parameter

Condition

Description

srcSSL

When the connection type is Cloud Instance or Self-managed Database on ECS.

The connection method for the source database. Valid values:

  • 0: Unencrypted connection.

  • 1: SSL-encrypted connection.

Table 7. Source database type: Oracle

Parameter

Configuration condition

Description

isTargetDbCaseSensitive

Required if the destination database is AnalyticDB for PostgreSQL.

Specifies whether to add quotation marks to destination objects. Valid values are true and false.

isNeedAddRowId

Required if the destination database is AnalyticDB for PostgreSQL and the objects for synchronization or migration include tables without primary keys.

Specifies whether to set ROWID as the primary key and distribution key for all tables without primary keys. Valid values are true and false.

srcOracleType

Required

The type of the Oracle instance. Valid values:

  • sid: A non-RAC instance.

  • serviceName: A Real Application Clusters (RAC) or pluggable database (PDB) instance.

source.column.encoding

Required when you need to specify the write encoding for your business.

The write encoding for your business. Supported encodings:

  • default (Default)

  • GB 2312

  • GBK

  • GB 18030

  • UTF-8

  • UTF-16

  • UTF-32

Table 8. Source database type: SQL Server (including ApsaraDB RDS for SQL Server and self-managed SQL Server)

Parameter

Configuration condition

Description

isTargetDbCaseSensitive

When the destination database is AnalyticDB for PostgreSQL.

Specifies whether to add quotation marks to destination objects. The valid values are true and false.

source.extractor.type

Required when the destination database is not DataHub and an incremental task is configured.

The mode for incremental synchronization or migration for SQL Server. The valid values are:

  • cdc: Uses log parsing for incremental synchronization or migration of non-heap tables, and uses Change Data Capture (CDC) for heap tables.

  • log: Parses source database logs for incremental synchronization or migration.

src.sqlserver.schema.mapper.mode

When the destination database is MySQL (including ApsaraDB RDS for MySQL and self-managed MySQL), PolarDB for MySQL, or AnalyticDB for MySQL.

The structure mapping mode between the source and destination databases. The valid values are:

  • schema.table: Uses the SchemaName.TableName from the source database as the name of the destination table.

  • without.schema: Uses the table name from the source database as the name of the destination table.

    Warning

    If tables with the same name exist in multiple schemas of the source database, data inconsistency or task failure may occur.

Table 9. Source database type: Tair/Redis

Note

This includes Alibaba Cloud Tair (Redis-compatible) and self-managed Redis.

Parameter

Configuration condition

Description

srcKvStoreMode

Required when the connection type of the database instance is not Alibaba Cloud Instance.

The instance pattern of the source self-managed Redis database. Valid values:

  • single: Basic Edition.

  • cluster: Cluster Edition.

any.sink.redis.expire.extension.seconds

Required

The additional time in seconds to extend the time-to-live (TTL) for a key when it is migrated from the source database to the destination database. To ensure data consistency, set the extended TTL to 600 seconds or more if you use commands such as the following.

EXPIRE key seconds
PEXPIRE key milliseconds
EXPIREAT key timestamp
PEXPIREAT key timestampMs

any.source.redis.use.slave.node

Required when srcKvStoreMode is set to cluster.

Specifies whether to pull data from a slave node. Valid values:

  • true: Pull data from a slave node.

  • false (default): Pull data from the master node.
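
As a sketch, a Reserve fragment for a self-managed Redis source in cluster mode that reads from slave nodes and extends each migrated key's TTL by 600 seconds might look like the following; treat it as an illustration of how the three parameters above combine.

import json

# Sketch: cluster-mode self-managed Redis source, reading from slave
# nodes, with a 600-second TTL extension for migrated keys.
reserve = json.dumps({
    "srcKvStoreMode": "cluster",
    "any.sink.redis.expire.extension.seconds": "600",
    "any.source.redis.use.slave.node": "true",
})
print(reserve)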

Table 10. Source database type: MongoDB (including ApsaraDB for MongoDB and self-managed MongoDB)

Parameter

Condition

Description

srcEngineArchType

Required

The architecture of the source MongoDB database. Valid values:

  • 0: Single-node architecture.

  • 1: ReplicaSet architecture.

  • 2: Sharded cluster architecture.

sourceShardEndpointUsername

Required when srcEngineArchType is set to 2.

The username for the shard of the source MongoDB database.

sourceShardEndpointPassword

The password for the shard of the source MongoDB database.

Table 11. Source database type: PolarDB-X 2.0

Parameter

Configuration condition

Description

amp.increment.generator.logmnr.mysql.heartbeat.mode

Required

Specifies whether to write heartbeat SQL statements for forward and reverse tasks. Valid values:

  • none: Does not write heartbeat SQL information to the source database.

  • N: Writes heartbeat SQL information to the source database.

Table 12. Source database type: PolarDB for PostgreSQL (Compatible with Oracle)

Parameter

Condition

Description

srcHostPortCtl

When the connection type is public IP address.

Specifies whether to use a single source or multiple sources for PolarDB for PostgreSQL (Compatible with Oracle). Valid values:

  • single: Single Data Source.

  • multiple: Multiple Data Sources.

srcHostPorts

When srcHostPortCtl is set to multiple.

The IP addresses and port numbers of the source PolarDB for PostgreSQL (Compatible with Oracle) nodes. Separate multiple IP:Port pairs with commas.

Table 13. Source database type: TiDB

Parameter

Configuration condition

Description

amp.increment.generator.logmnr.mysql.heartbeat.mode

Required

Specifies whether to remove the SQL for the heartbeat table from forward and reverse tasks:

  • none: Does not write heartbeat SQL statements to the source database.

  • N: Writes heartbeat SQL statements to the source database.

isIncMigration

Required

Specifies whether to perform incremental migration. Valid values are yes or no.

Important

Sync tasks support only yes.

srcKafka

Required if isIncMigration is set to yes.

Information about the Kafka cluster downstream of TiDB.

taskType

The type of the Kafka cluster. Select a type based on the deployment location of the Kafka cluster. Valid values:

  • EXPRESS: Connected through Express Connect (leased line), VPN Gateway, or Smart Access Gateway.

  • ECS: A self-managed database hosted on an ECS instance.

bisId

  • If taskType is set to ECS, this parameter specifies the ID of the ECS instance.

  • If taskType is set to EXPRESS, this parameter specifies the ID of the virtual private cloud (VPC) that is connected to the source database.

port

The service port of the Kafka cluster.

user

The username for the Kafka cluster. Leave this blank if authentication is disabled for the Kafka cluster.

passwd

The password for the Kafka cluster. Leave this blank if authentication is disabled for the Kafka cluster.

version

The version of the Kafka cluster.

ssl

The connection method for the Kafka cluster. Valid values:

  • 0: Plaintext connection.

  • 3: Encrypted connection using SCRAM-SHA-256.

topic

The topic that contains the objects to migrate or sync.

host

Required if taskType is set to EXPRESS.

The IP address of the Kafka cluster.

vpcId

Required if taskType is set to ECS.

The VPC where the ECS instance resides.
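
The srcKafka sub-parameters above describe the Kafka cluster downstream of TiDB. Assuming they are passed as a nested object (verify the exact shape with the OpenAPI request parameter preview), a Reserve fragment for a Kafka cluster hosted on an ECS instance might look like this sketch; the instance ID, VPC ID, and topic name are placeholders.

import json

# Sketch: TiDB source with incremental migration through a downstream
# Kafka cluster hosted on an ECS instance. IDs and topic are placeholders.
reserve = json.dumps({
    "isIncMigration": "yes",
    "srcKafka": {
        "taskType": "ECS",
        "bisId": "i-bp11haem1kpkhoup****",
        "port": "9092",
        "user": "",      # leave empty if authentication is disabled
        "passwd": "",
        "version": "1.0",
        "ssl": "0",
        "topic": "tidb-cdc-topic",
        "vpcId": "vpc-bp1opxu1zkhn00gzv****",
    },
})
print(reserve)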

Destination database parameter settings

Configure the parameters in the Reserve parameter based on the destination database type.

Table 14. Destination database type: MySQL (including RDS for MySQL and self-managed MySQL)

Parameter

Configuration condition

Description

privilegeMigration

When both the source and destination database types are RDS for MySQL. For more information, see Source database type is MySQL (including RDS for MySQL and self-managed MySQL).

Specifies whether to migrate accounts.

privilegeDbList

The information about the accounts to be migrated.

definer

Specifies whether to retain the original definer of database objects.

whitelist.dms.online.ddl.enable

When the source database type is MySQL (including RDS for MySQL and self-managed MySQL) or PolarDB for MySQL, and the instance is for synchronization or migration. For more information, see Source database parameter settings.

These six parameters must be used together. They control whether to replicate the temporary tables generated by the online DDL tool for the source table to the destination database.

sqlparser.dms.original.ddl

whitelist.ghost.online.ddl.enable

sqlparser.ghost.original.ddl

online.ddl.shadow.table.rule

online.ddl.trash.table.rule

isAnalyzer

When the database type of both the source and destination instances is MySQL (including RDS for MySQL and self-managed MySQL).

Specifies whether to enable the migration evaluation feature to assess if the structures of the source and destination databases meet the requirements. Valid values are true and false.

triggerMode

Required

The method to migrate triggers from the source database. Valid values:

  • manual: Manually migrate triggers.

  • auto: Automatically migrate triggers.

destSSL

When the connection type is Cloud Instance or ECS-hosted Self-managed Database.

The connection method for the destination database. Valid values:

  • 0: Non-encrypted connection.

  • 1: SSL secure connection.

src.sqlserver.schema.mapper.mode

When the source database type is SQL Server (including RDS for SQL Server and self-managed SQL Server).

The structure mapping mode between the source and destination databases. For more information, see Source database type is SQL Server (including RDS for SQL Server and self-managed SQL Server).

Table 15. Destination database type: PolarDB for MySQL

Parameter

Configuration condition

Description

whitelist.dms.online.ddl.enable

When the source database type is MySQL (including RDS for MySQL and self-managed MySQL) or PolarDB for MySQL, and the instance is for synchronization or migration. For more information, see Source database parameter settings.

These six parameters must be used together. They control whether to replicate the temporary tables generated by the online DDL tool for the source table to the destination database.

sqlparser.dms.original.ddl

whitelist.ghost.online.ddl.enable

sqlparser.ghost.original.ddl

online.ddl.shadow.table.rule

online.ddl.trash.table.rule

anySinkTableEngineType

Required

The engine type of the PolarDB for MySQL instance. Valid values:

  • innodb: The default storage engine.

  • xengine: The On-Line Transaction Processing (OLTP) database storage engine.

triggerMode

Required

The method to migrate triggers from the source database. Valid values:

  • manual: Manually migrate triggers.

  • auto: Automatically migrate triggers.

src.sqlserver.schema.mapper.mode

When the source database type is SQL Server (including RDS for SQL Server and self-managed SQL Server).

The structure mapping mode between the source and destination databases. For more information, see Source database type is SQL Server (including RDS for SQL Server and self-managed SQL Server).

Table 16. Destination database type: AnalyticDB for MySQL

Parameter

Configuration condition

Description

whitelist.dms.online.ddl.enable

When the source database type is MySQL (including RDS for MySQL and self-managed MySQL) or PolarDB for MySQL, and the instance is for synchronization or migration. For more information, see Source database parameter settings.

These six parameters must be used together. They control whether to replicate the temporary tables generated by the online DDL tool for the source table to the destination database.

sqlparser.dms.original.ddl

whitelist.ghost.online.ddl.enable

sqlparser.ghost.original.ddl

online.ddl.shadow.table.rule

online.ddl.trash.table.rule

triggerMode

Required

The method to migrate triggers from the source database. Valid values:

  • manual: Manually migrate triggers.

  • auto: Automatically migrate triggers.

src.sqlserver.schema.mapper.mode

When the source database type is SQL Server (including RDS for SQL Server and self-managed SQL Server).

The structure mapping mode between the source and destination databases. For more information, see Source database type is SQL Server (including RDS for SQL Server and self-managed SQL Server).

traceDatasource

Required

Specifies whether to enable multi-table merging. Valid values are true and false.

tagColumnValue

When you need to specify whether to customize the tag column.

Specifies whether to customize the __dts_data_source tag column. Valid values:

  • tagColumnValue: Customize the tag column.

    Important

    You also need to configure the value of the tag column in the DbList parameter. For more information, see Description of objects for migration, synchronization, or subscription.

  • notTagColumnValue: Do not customize the tag column.

    Important

    Currently, only instances that are configured after purchase support custom tag columns.

adsSqlType

When you need to select SQL operations for incremental synchronization or migration at the instance level.

Selects the SQL operations for incremental synchronization or migration at the instance level. Separate multiple SQL operations with commas. Valid values:

  • insert

  • update

  • delete

  • alterTable

  • truncateTable

  • createTable

  • dropTable

Table 17. Destination database type: AnalyticDB for PostgreSQL

Parameter

Configuration condition

Description

whitelist.dms.online.ddl.enable

When the source database type is MySQL (including RDS for MySQL and self-managed MySQL) or PolarDB for MySQL, and the instance is for synchronization or migration. For more information, see Source database parameter settings.

These six parameters must be used together. They control whether to replicate the temporary tables generated by the online DDL tool for the source table to the destination database.

sqlparser.dms.original.ddl

whitelist.ghost.online.ddl.enable

sqlparser.ghost.original.ddl

online.ddl.shadow.table.rule

online.ddl.trash.table.rule

isTargetDbCaseSensitive

When the source database type is MySQL (including RDS for MySQL and self-managed MySQL), Oracle, or SQL Server (including RDS for SQL Server and self-managed SQL Server).

Specifies whether to add quotation marks to destination objects. Valid values are true and false.

syncOperation

When you need to select SQL operations for incremental synchronization or migration at the instance level.

Selects the SQL operations for incremental synchronization or migration at the instance level. Separate multiple SQL operations with commas. Valid values:

  • insert

  • update

  • delete

  • alterTable

  • truncateTable

  • createTable

  • dropTable

  • createDB

  • dropDB

Table 18. Destination database type: RDS for MariaDB

Parameter

Configuration condition

Description

triggerMode

Required

The method to migrate triggers from the source database. Valid values:

  • manual: Manually migrate triggers.

  • auto: Automatically migrate triggers.

destSSL

When the connection type is Cloud Instance or ECS-hosted Self-managed Database.

The connection method for the destination database. Valid values:

  • 0: Non-encrypted connection.

  • 1: SSL secure connection.

Table 19. Destination database type: MongoDB (including ApsaraDB for MongoDB and self-managed MongoDB)

Parameter

Configuration condition

Description

destEngineArchType

Required

The architecture type of the destination MongoDB database. Valid values:

  • 0: Single-node architecture.

  • 1: Replica set architecture.

  • 2: Sharded cluster architecture.

Table 20. Destination database type: Tair/Redis

Note

This includes ApsaraDB for Tair (Redis-compatible) and self-managed Redis.

Parameter

Configuration condition

Description

destKvStoreMode

When the connection type for the database instance is not Alibaba Cloud Instance.

The instance mode of the self-managed destination Redis. Valid values:

  • single: Basic Edition.

  • cluster: Cluster Edition.

any.sink.redis.expire.extension.seconds

Required

The additional time in seconds to extend the time-to-live (TTL) for a key when it is migrated from the source database to the destination database. To ensure data consistency, set the extended TTL to 600 seconds or more if you use commands such as the following.

EXPIRE key seconds
PEXPIRE key milliseconds
EXPIREAT key timestamp
PEXPIREAT key timestampMs

Table 21. Destination database type: PolarDB for PostgreSQL (Compatible with Oracle)

Parameter

Configuration condition

Description

destHostPortCtl

When the connection type is Public IP.

Specifies whether to use a single source or multiple sources for PolarDB for PostgreSQL (Compatible with Oracle). Valid values:

  • single: Single Data Source.

  • multiple: Multiple Data Sources.

destHostPorts

When destHostPortCtl is multiple.

The IP addresses and port numbers of the destination PolarDB for PostgreSQL (Compatible with Oracle) nodes. Separate multiple IP:Port pairs with commas.

Table 22. Destination database type: Oracle

Parameter

Configuration condition

Description

destOracleType

Required

The type of the Oracle instance. Valid values:

  • sid: A non-RAC instance.

  • serviceName: A RAC or PDB instance.

Table 23. Destination database type: DataHub

Parameter

Configuration condition

Description

isUseNewAttachedColumn

Required

The naming convention for attached columns. Valid values:

  • true: Uses the new naming convention for attached columns.

  • false: Uses the old naming convention for attached columns.

Table 24. Destination database type: MaxCompute

Parameter

Configuration condition

Description

isUseNewAttachedColumn

Required

The naming convention for attached columns. Valid values:

  • true: Uses the new naming convention for attached columns.

  • false: Uses the old naming convention for attached columns.

partition

Required

The partition name of the incremental log table. Valid values:

  • When isUseNewAttachedColumn is true:

    • modifytime_year

    • modifytime_month

    • modifytime_day

    • modifytime_hour

    • modifytime_minute

  • When isUseNewAttachedColumn is false:

    • new_dts_sync_modifytime_year

    • new_dts_sync_modifytime_month

    • new_dts_sync_modifytime_day

    • new_dts_sync_modifytime_hour

    • new_dts_sync_modifytime_minute

Table 25. Destination database type: Elasticsearch

Parameter

Configuration condition

Description

indexMapping

Required

The name of the index created in the destination Elasticsearch instance. Valid values:

  • tb: The created index name is the same as the table name.

  • db_tb: The created index name is a combination of the database name, an underscore (_), and the table name.

Table 26. Destination database type: Kafka

Parameter

Configuration condition

Description

destTopic

Required

The topic in the destination Kafka cluster to which the migration or synchronization object belongs.

destVersion

Required

The version of the destination Kafka cluster. Valid values are 1.0, 0.9, and 0.10.

Note

If the Kafka cluster version is 1.0 or later, enter 1.0.

destSSL

Required

The method to connect to the destination Kafka cluster. Valid values:

  • 0: Non-encrypted connection.

  • 3: Encrypted connection using SCRAM-SHA-256.

sink.kafka.ddl.topic

When you need to specify a topic to store DDL information.

The topic used to store DDL information. If you do not enter a value, DDL information is stored in the topic specified by destTopic by default.

kafkaRecordFormat

Required

The storage format for data delivered to the destination Kafka cluster. Valid values:

  • canal_json: Uses Canal to parse incremental logs of the database and transmit incremental data to the destination Kafka cluster. The data is stored in Canal JSON format.

  • dts_avro: A data serialization format that converts data structures or objects into a format that is easy to store or transmit.

  • shareplex_json: Uses the SharePlex data replication software to read data from the source database. The data is stored in the SharePlex JSON format when written to the destination Kafka cluster.

  • debezium: A tool that captures data changes and supports real-time streaming of data updates from the source database to the destination Kafka cluster.

Note

For more information about formats, see Data storage formats in message queues.

destKafkaPartitionKey

Required

The Kafka partition synchronization policy. Valid values:

  • none: Delivers all data and DDL information to partition 0 of the destination topic.

  • database_table: Merges the database name and table name to serve as the partition key for hash calculation. Then, it delivers the data and DDL information of each table to different partitions of the destination topic.

  • columns: Uses columns in the table (the primary key by default, or a unique key if no primary key exists) as the partition key for hash calculation. Then, it delivers different rows to different partitions of the destination topic. You can also specify one or more columns as the partition key for hash calculation.

Note

For more information about partition synchronization policies, see Kafka partition synchronization policies.

destSchemaRegistry

Required

Specifies whether to use Kafka Schema Registry. Valid values are yes and no.

destKafkaSchemaRegistryUrl

When destSchemaRegistry is set to yes.

The URL or IP address for registering the Avro schema in the Kafka Schema Registry.
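
To tie the table together, here is a hedged sketch of a Reserve value for a Kafka destination that stores data in the Canal JSON format and partitions by database and table name; the topic name is a placeholder.

import json

# Sketch: Kafka destination using the Canal JSON format and hash
# partitioning by database and table name. The topic name is a placeholder.
reserve = json.dumps({
    "destTopic": "dts-sync-topic",
    "destVersion": "1.0",
    "destSSL": "0",
    "kafkaRecordFormat": "canal_json",
    "destKafkaPartitionKey": "database_table",
    "destSchemaRegistry": "no",
})
print(reserve)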

Table 27. Destination database type: OSS for data lakehouse integration tasks

Parameter

Configuration condition

Description

fusionMetastoreMod

Required

The storage location of the destination OSS metadata (data catalog). Valid values:

  • dms: Stored in DMS.

  • adb: Stored in an AnalyticDB for MySQL 3.0 cluster.

  • none: Metadata is not stored.

fusionOssFilePath

Required

The directory in the destination OSS for data storage.

fusionOssFileFormat

Required

The format of the integrated data in the destination OSS. Valid values:

  • DELTA: Delta format.

  • PARQUET: Parquet format.

  • HUDI: Hudi format.

fusionAdbCrossMdsDbClusterId

When fusionMetastoreMod is adb.

The ID of the AnalyticDB for MySQL 3.0 cluster that stores the destination OSS metadata.

fusionAdbCrossMdsRamId

When fusionMetastoreMod is adb.

The ID of the Alibaba Cloud account or RAM user that owns the AnalyticDB for MySQL 3.0 cluster storing the destination OSS metadata.

Note

The account or user must have write permissions on the AnalyticDB for MySQL 3.0 cluster database.

srcConnArgs

When the source database is MySQL or SQL Server.

The connection parameters of the source database in JSON format. For example:

"srcConnArgs": {
 "rewriteBatchedStatements": "true",
 "tinyInt1isBit": "false",
 "zeroDateTimeBehavior": "convertToNull",
 "yearIsDateType": "false"
 }

sparkExtraArgs

When you configure Spark task parameters.

The Spark task parameters in JSON format. For example:

 "sparkExtraArgs": {
 "spark.executor.extraJavaOptions": "-verbose:class",
 "spark.driver.extraJavaOptions": "-verbose:class"
 }
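
Finally, a hedged sketch of a Reserve value for a data lakehouse task that writes Parquet files to OSS with metadata stored in DMS, passing through the nested srcConnArgs and sparkExtraArgs objects shown above; the OSS directory is a placeholder.

import json

# Sketch: data lakehouse task writing Parquet files to OSS, with
# metadata stored in DMS. The OSS directory is a placeholder.
reserve = json.dumps({
    "fusionMetastoreMod": "dms",
    "fusionOssFilePath": "dts-lakehouse/demo/",
    "fusionOssFileFormat": "PARQUET",
    "srcConnArgs": {
        "rewriteBatchedStatements": "true",
        "tinyInt1isBit": "false",
        "zeroDateTimeBehavior": "convertToNull",
        "yearIsDateType": "false",
    },
    "sparkExtraArgs": {
        "spark.executor.extraJavaOptions": "-verbose:class",
        "spark.driver.extraJavaOptions": "-verbose:class",
    },
})
print(reserve)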