Data Transmission Service:Reserve parameter

Last Updated:Nov 19, 2024

When you call an API operation to configure a Data Transmission Service (DTS) task or query the information about a DTS task, you can specify or query the Reserve parameter. The value of the Reserve parameter is a JSON string. The Reserve parameter allows you to supplement or view the configurations of the source or destination database instance. For example, you can specify the data storage format of the destination Kafka cluster or the ID of the Cloud Enterprise Network (CEN) instance that is used to access a database instance in the Reserve parameter. This topic describes the scenarios and settings of the Reserve parameter.

Usage notes

  • You must specify common parameters based on the type of the DTS instance and the access methods of the source and destination databases, and then specify other parameters based on the actual situation, such as the types of the source and destination databases.

  • If the source and destination databases of the DTS instance that you want to configure contain the same parameter settings, you need to specify these parameters in the Reserve parameter only once.

  • If you want to specify a numeric value, you must enclose the numeric value in double quotation marks (") to convert it into a string.

  • When you configure a task in the DTS console, you can move the pointer over Next: Save Task Settings and Precheck in the Advanced Settings step and click Preview OpenAPI parameters to view the parameters that are used to configure the task by calling an API operation.
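Because the Reserve value is a JSON string and every numeric value must be quoted as a string, it is convenient to build the value as a dictionary and serialize it. A minimal sketch in Python (the parameter names come from the tables below; the values are illustrative):

```python
import json

# Build the Reserve value as a dict. Note that every numeric value
# is passed as a string, as required by the Reserve parameter.
reserve = {
    "targetTableMode": "2",                 # numeric value quoted as a string
    "dts.datamove.source.bps.max": "1024",  # throttle in MB/s, also a string
}

# The API expects the Reserve parameter as a single JSON string.
reserve_json = json.dumps(reserve)
print(reserve_json)
```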

Related API operation

Common parameters

Specify the following parameters in the Reserve parameter based on the type of the DTS instance and the access methods of the source and destination databases.

Table 1. Data migration or synchronization instance

Parameter

Required

Description

targetTableMode

Yes

The method of processing conflicting tables. Valid values:

  • 0: performs a precheck and reports errors.

  • 2: ignores errors and proceeds.

dts.datamove.source.bps.max

No

The maximum amount of data that is synchronized or migrated per second during full data synchronization or migration. Unit: MB. Valid values: 0 to 9007199254740991. The value must be an integer.

conflict

No

The conflict processing policy of the two-way data synchronization task. Valid values:

  • overwrite: If a data conflict occurs during data synchronization, the conflicting data in the destination database is overwritten.

  • interrupt: If a data conflict occurs during data synchronization, an error is reported and the data synchronization task fails. You must manually modify and resume the data synchronization task.

  • ignore: If a data conflict occurs during data synchronization, the conflicting data in the destination database is retained and the data synchronization task continues.

filterDDL

No

Specifies whether to ignore DDL operations in the forward task of the two-way data synchronization task. Valid values:

  • true: does not synchronize DDL operations.

  • false: synchronizes DDL operations.

    Important

    By default, the reverse task ignores DDL operations.

autoStartModulesAfterConfig

No

Specifies whether to automatically start a precheck after the DTS task is configured. Valid values:

  • none (default): does not start a precheck or subsequent operations after the DTS task is configured. In this case, you must manually start the DTS task.

  • auto: automatically starts a precheck and all subsequent operations after the DTS task is configured.

etlOperatorCtl

No

Specifies whether to configure the extract, transform, and load (ETL) feature. Valid values:

  • Y

  • N

etlOperatorSetting

No

The ETL statements. For more information, see DSL syntax.

etlOperatorColumnReference

No

The ETL operator that is dedicated to T+1 business.

configKeyMap

No

The configuration information of the ETL operator.

syncArchitecture

No

The synchronization topology. Valid values:

  • oneway: one-way synchronization.

  • bidirectional: two-way synchronization.

dataCheckConfigure

No

The data verification settings. For more information, see DataCheckConfigure parameter description.

dbListCaseChangeMode

No

The capitalization of object names in the destination database. Valid values:

  • default: uses the default capitalization policy of DTS.

  • source: uses the capitalization policy of the source database.

  • dest_upper: uses the uppercase.

  • dest_lower: uses the lowercase.

maxRetryTime

No

The retry time range for a failed connection to the source or destination database. Valid values: 600 to 86400. Unit: seconds. The value must be an integer. Default value: 43200 (720 minutes). We recommend that you set this parameter to a value greater than 1800 seconds (30 minutes).
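The common parameters in Table 1 can be combined into a single Reserve value. An illustrative Python sketch for a two-way data synchronization instance (all values below are examples, not recommendations):

```python
import json

# Illustrative Reserve value for a two-way data synchronization instance,
# using common parameters from Table 1. All values are strings.
reserve = {
    "targetTableMode": "0",            # precheck and report errors on conflicting tables
    "syncArchitecture": "bidirectional",
    "conflict": "overwrite",           # overwrite conflicting data in the destination
    "filterDDL": "false",              # synchronize DDL operations in the forward task
    "autoStartModulesAfterConfig": "auto",
    "maxRetryTime": "43200",           # retry window in seconds (the default, 720 minutes)
}
print(json.dumps(reserve))
```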

Table 2. Change tracking instance

Parameter

Required

Description

vpcId

Yes

The ID of the virtual private cloud (VPC) in which the change tracking instance is deployed.

vswitchId

Yes

The vSwitch ID of the change tracking instance.

startTime

No

The beginning of the time range to track data changes. Specify a UNIX timestamp. Unit: seconds.

endTime

No

The end of the time range to track data changes. Specify a UNIX timestamp. Unit: seconds.
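For a change tracking instance, the Reserve value carries the network settings and the optional tracking time range. A sketch with placeholder IDs (the timestamps are UNIX seconds, passed as strings):

```python
import json
import datetime

# Illustrative Reserve value for a change tracking instance (Table 2).
# The vpcId and vswitchId values are placeholders.
start = int(datetime.datetime(2024, 11, 1).timestamp())
end = int(datetime.datetime(2024, 11, 2).timestamp())
reserve = {
    "vpcId": "vpc-example********",
    "vswitchId": "vsw-example********",
    "startTime": str(start),   # beginning of the tracking range, UNIX seconds
    "endTime": str(end),       # end of the tracking range, UNIX seconds
}
print(json.dumps(reserve))
```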

Table 3. Database instance that is accessed by using a CEN instance

Parameter

Required

Description

srcInstanceId

No

The ID of the CEN instance that is used to access the source database instance. Example:

{
  "srcInstanceId": "cen-9kqshqum*******"
}
Note

You must specify this parameter if the source database instance is accessed by using a CEN instance.

destInstanceId

No

The ID of the CEN instance that is used to access the destination database instance. Example:

{
  "destInstanceId": "cen-9kqshqum*******"
}
Note

You must specify this parameter if the destination database instance is accessed by using a CEN instance.

Source database parameters

Specify the following parameters in the Reserve parameter based on the type of the source database.

Table 4. Source ApsaraDB RDS for MySQL instance and self-managed MySQL database

Parameter

Configuration condition

Description

privilegeMigration

The source and destination databases are ApsaraDB RDS for MySQL instances.

Specifies whether to migrate accounts. Valid values:

  • true

  • false (default)

privilegeDbList

The accounts to be migrated.

definer

Specifies whether to retain the original definers of database objects. Valid values: true and false.

amp.increment.generator.logmnr.mysql.heartbeat.mode

The source database is a self-managed MySQL database.

Specifies whether to write SQL operations on heartbeat tables of the forward and reverse tasks to the source database. Valid values:

  • none: does not write SQL operations on heartbeat tables to the source database.

  • N: writes SQL operations on heartbeat tables to the source database.

whitelist.dms.online.ddl.enable

The DTS instance is a data migration or synchronization instance. The destination database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, a PolarDB for MySQL cluster, an AnalyticDB for MySQL cluster, or an AnalyticDB for PostgreSQL instance.

Specifies whether to replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database. The six parameters must be used together.

  • Replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database.

    {
      "whitelist.dms.online.ddl.enable": "true",
      "sqlparser.dms.original.ddl": "false",
      "whitelist.ghost.online.ddl.enable": "true",
      "sqlparser.ghost.original.ddl": "false"
    }
  • Do not replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database, but synchronize only the original DDL operations that are performed by using DMS in the source database.

    {
      "whitelist.dms.online.ddl.enable": "false",
      "sqlparser.dms.original.ddl": "true",
      "whitelist.ghost.online.ddl.enable": "false",
      "sqlparser.ghost.original.ddl": "false"
    }
  • Do not replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database, but synchronize only the original DDL operations that are performed by using gh-ost in the source database.

    {
      "whitelist.dms.online.ddl.enable": "false",
      "sqlparser.dms.original.ddl": "false",
      "whitelist.ghost.online.ddl.enable": "false",
      "sqlparser.ghost.original.ddl": "true",
      "online.ddl.shadow.table.rule": "^_(.+)_(?:gho|new)$",
      "online.ddl.trash.table.rule": "^_(.+)_(?:ghc|del|old)$"
    }
    Note

    You can specify the default or custom regular expressions for online.ddl.shadow.table.rule and online.ddl.trash.table.rule to filter out the shadow tables of the gh-ost tool and tables that are not required.

sqlparser.dms.original.ddl

whitelist.ghost.online.ddl.enable

sqlparser.ghost.original.ddl

online.ddl.shadow.table.rule

online.ddl.trash.table.rule

isAnalyzer

The source and destination databases are ApsaraDB RDS for MySQL instances or self-managed MySQL databases.

Specifies whether to enable the migration assessment feature. The feature checks whether the schemas in the source and destination databases meet the migration requirements. Valid values: true and false.

srcSSL

The source database is an Alibaba Cloud database instance or a self-managed database hosted on an Elastic Compute Service (ECS) instance.

Specifies whether to encrypt the connection to the source database. Valid values:

  • 0: does not encrypt the connection.

  • 1: encrypts the connection by using SSL.

Table 5. Source PolarDB for MySQL cluster

Parameter

Configuration condition

Description

amp.increment.generator.logmnr.mysql.heartbeat.mode

This parameter is required for all scenarios.

Specifies whether to write SQL operations on heartbeat tables of the forward and reverse tasks to the source database. Valid values:

  • none: does not write SQL operations on heartbeat tables to the source database.

  • N: writes SQL operations on heartbeat tables to the source database.

whitelist.dms.online.ddl.enable

The DTS instance is a data migration or synchronization instance. The destination database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, a PolarDB for MySQL cluster, an AnalyticDB for MySQL cluster, or an AnalyticDB for PostgreSQL instance.

Specifies whether to replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database. The six parameters must be used together.

  • Replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database.

    {
      "whitelist.dms.online.ddl.enable": "true",
      "sqlparser.dms.original.ddl": "false",
      "whitelist.ghost.online.ddl.enable": "true",
      "sqlparser.ghost.original.ddl": "false"
    }
  • Do not replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database, but synchronize only the original DDL operations that are performed by using DMS in the source database.

    {
      "whitelist.dms.online.ddl.enable": "false",
      "sqlparser.dms.original.ddl": "true",
      "whitelist.ghost.online.ddl.enable": "false",
      "sqlparser.ghost.original.ddl": "false"
    }
  • Do not replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database, but synchronize only the original DDL operations that are performed by using gh-ost in the source database.

    {
      "whitelist.dms.online.ddl.enable": "false",
      "sqlparser.dms.original.ddl": "false",
      "whitelist.ghost.online.ddl.enable": "false",
      "sqlparser.ghost.original.ddl": "true",
      "online.ddl.shadow.table.rule": "^_(.+)_(?:gho|new)$",
      "online.ddl.trash.table.rule": "^_(.+)_(?:ghc|del|old)$"
    }
    Note

    You can specify the default or custom regular expressions for online.ddl.shadow.table.rule and online.ddl.trash.table.rule to filter out the shadow tables of the gh-ost tool and tables that are not required.

sqlparser.dms.original.ddl

whitelist.ghost.online.ddl.enable

sqlparser.ghost.original.ddl

online.ddl.shadow.table.rule

online.ddl.trash.table.rule

Table 6. Source ApsaraDB RDS for MariaDB instance

Parameter

Configuration condition

Description

srcSSL

The source database is an Alibaba Cloud database instance or a self-managed database hosted on an ECS instance.

Specifies whether to encrypt the connection to the source database. Valid values:

  • 0: does not encrypt the connection.

  • 1: encrypts the connection by using SSL.

Table 7. Source Oracle database

Parameter

Configuration condition

Description

isTargetDbCaseSensitive

The destination database is an AnalyticDB for PostgreSQL instance.

Specifies whether to enclose the names of destination objects in double quotation marks ("). Valid values: true and false.

isNeedAddRowId

The destination database is an AnalyticDB for PostgreSQL instance and the objects to be synchronized or migrated include tables without primary keys.

Specifies whether to set the primary keys and distribution keys of all tables that have no primary keys to the row ID. Valid values: true and false.

srcOracleType

This parameter is required for all scenarios.

The architecture type of the Oracle database. Valid values:

  • sid: non-Real Application Cluster (RAC).

  • serviceName: RAC or pluggable database (PDB).

source.column.encoding

You need to specify the actual encoding format of the source database.

The actual encoding format. Valid values:

  • default (default)

  • GB 2312

  • GBK

  • GB 18030

  • UTF-8

  • UTF-16

  • UTF-32

Table 8. Source ApsaraDB RDS for SQL Server instance and self-managed SQL Server database

Parameter

Configuration condition

Description

isTargetDbCaseSensitive

The destination database is an AnalyticDB for PostgreSQL instance.

Specifies whether to enclose the names of destination objects in double quotation marks ("). Valid values: true and false.

source.extractor.type

The destination database is not a DataHub project, and an incremental migration or synchronization task needs to be configured.

The mode in which incremental data is migrated or synchronized from the SQL Server database. Valid values:

  • cdc: performs incremental data synchronization or migration by parsing the logs of the source database for non-heap tables, and performs change data capture (CDC)-based incremental data synchronization or migration for heap tables.

  • log: performs incremental data synchronization or migration by parsing the logs of the source database.

src.sqlserver.schema.mapper.mode

The destination database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, a PolarDB for MySQL cluster, or an AnalyticDB for MySQL cluster.

The schema mapping mode between the source and destination databases. Valid values:

  • schema.table: uses Source schema name.Source table name as the name of the destination table.

  • without.schema: uses the source table name as the name of the destination table.

    Warning

    If multiple schemas in the source database contain tables that have the same name, data inconsistency may occur or the DTS task may fail.

Table 9. Source Tair (Redis OSS-Compatible) and self-managed Redis database

Parameter

Configuration condition

Description

srcKvStoreMode

The access method of the source database is not set to Alibaba Cloud Instance.

The deployment mode of the source self-managed Redis database. Valid values:

  • single: standalone deployment.

  • cluster: cluster deployment.

any.sink.redis.expire.extension.seconds

This parameter is required for all scenarios.

The additional time period, in seconds, during which keys migrated from the source database to the destination database remain valid. To ensure data consistency, we recommend that you set this parameter to a value greater than 600 if any of the following commands are used:

EXPIRE key seconds
PEXPIRE key milliseconds
EXPIREAT key timestamp
PEXPIREAT key timestampMs

any.source.redis.use.slave.node

The value of srcKvStoreMode is set to cluster.

Specifies whether to pull data from master or replica nodes if the source self-managed Redis database is deployed in a cluster. Valid values:

  • true: pulls data from replica nodes.

  • false (default): pulls data from master nodes.

Table 10. Source ApsaraDB for MongoDB instance and self-managed MongoDB database

Parameter

Configuration condition

Description

srcEngineArchType

This parameter is required for all scenarios.

The architecture type of the source MongoDB database. Valid values:

  • 0: standalone architecture.

  • 1: replica set architecture.

  • 2: sharded cluster architecture.

sourceShardEndpointUsername

The value of srcEngineArchType is set to 2.

The account used to log on to a shard of the source MongoDB database.

sourceShardEndpointPassword

The password used to log on to a shard of the source MongoDB database.

Table 11. Source PolarDB-X 2.0 instance

Parameter

Configuration condition

Description

amp.increment.generator.logmnr.mysql.heartbeat.mode

This parameter is required for all scenarios.

Specifies whether to write SQL operations on heartbeat tables of the forward and reverse tasks to the source database. Valid values:

  • none: does not write SQL operations on heartbeat tables to the source database.

  • N: writes SQL operations on heartbeat tables to the source database.

Table 12. Source PolarDB for PostgreSQL (Compatible with Oracle) cluster

Parameter

Configuration condition

Description

srcHostPortCtl

The source database is accessed by using a public IP address.

Specifies whether to select multiple data sources for the PolarDB for PostgreSQL (Compatible with Oracle) cluster. Valid values:

  • single: Single Data Source.

  • multiple: Multiple Data Sources.

srcHostPorts

The value of srcHostPortCtl is set to multiple.

The IP addresses and port numbers of the nodes in the source PolarDB for PostgreSQL (Compatible with Oracle) cluster. Specify a value in the IP address:Port number format. Separate multiple values with commas (,).

Table 13. Source TiDB database

Parameter

Configuration condition

Description

amp.increment.generator.logmnr.mysql.heartbeat.mode

This parameter is required for all scenarios.

Specifies whether to write SQL operations on heartbeat tables of the forward and reverse tasks to the source database. Valid values:

  • none: does not write SQL operations on heartbeat tables to the source database.

  • N: writes SQL operations on heartbeat tables to the source database.

isIncMigration

This parameter is required for all scenarios.

Specifies whether to migrate incremental data. Valid values: yes and no.

Important

You can select only yes for data synchronization tasks.

srcKafka

The value of isIncMigration is set to yes.

The information about the downstream Kafka cluster of the TiDB database.

taskType

The type of the Kafka cluster. Specify this parameter based on the deployment location of the Kafka cluster. Valid values:

  • EXPRESS: connected over Express Connect, VPN Gateway, or Smart Access Gateway.

  • ECS: self-managed cluster hosted on an ECS instance.

bisId

  • The ID of the ECS instance if the value of taskType is set to ECS.

  • The ID of the VPC that is connected to the source database if the value of taskType is set to EXPRESS.

port

The service port number of the Kafka cluster.

user

The account of the Kafka cluster. If authentication is disabled for the Kafka cluster, you do not need to specify this parameter.

passwd

The password of the Kafka cluster. If authentication is disabled for the Kafka cluster, you do not need to specify this parameter.

version

The version of the Kafka cluster.

ssl

Specifies whether to encrypt the connection to the Kafka cluster. Valid values:

  • 0: does not encrypt the connection.

  • 3: encrypts the connection by using the SCRAM-SHA-256 algorithm.

topic

The topic of the objects to be migrated or synchronized.

host

The value of taskType is set to EXPRESS.

The IP address of the Kafka cluster.

vpcId

The value of taskType is set to ECS.

The ID of the VPC in which the ECS instance resides.
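The sub-parameters from taskType through vpcId describe the downstream Kafka cluster of the TiDB database. A sketch of a plausible Reserve fragment, assuming these sub-parameters nest inside the srcKafka object (this nesting, and all IDs, addresses, and names below, are illustrative placeholders):

```python
import json

# Hypothetical Reserve fragment for a TiDB source with incremental migration.
# Assumption: the Kafka sub-parameters nest inside the srcKafka object;
# every ID, address, and topic name here is a placeholder.
reserve = {
    "isIncMigration": "yes",
    "srcKafka": {
        "taskType": "ECS",               # Kafka cluster hosted on an ECS instance
        "bisId": "i-example********",    # ECS instance ID (because taskType is ECS)
        "vpcId": "vpc-example********",  # VPC in which the ECS instance resides
        "port": "9092",
        "version": "1.0",
        "ssl": "0",                      # do not encrypt the connection
        "topic": "tidb_cdc_topic",
    },
}
print(json.dumps(reserve))
```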

Destination database parameters

Specify the following parameters in the Reserve parameter based on the type of the destination database.

Table 14. Destination ApsaraDB RDS for MySQL instance and self-managed MySQL database

Parameter

Configuration condition

Description

privilegeMigration

The source and destination databases are ApsaraDB RDS for MySQL instances. For more information, see Source ApsaraDB RDS for MySQL instance and self-managed MySQL database.

Specifies whether to migrate accounts.

privilegeDbList

The accounts to be migrated.

definer

Specifies whether to retain the original definers of database objects.

whitelist.dms.online.ddl.enable

The DTS instance is a data migration or synchronization instance. The source database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, or a PolarDB for MySQL cluster. For more information, see Source database parameters.

Specifies whether to replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database. The six parameters must be used together.

sqlparser.dms.original.ddl

whitelist.ghost.online.ddl.enable

sqlparser.ghost.original.ddl

online.ddl.shadow.table.rule

online.ddl.trash.table.rule

isAnalyzer

The source and destination databases are ApsaraDB RDS for MySQL instances or self-managed MySQL databases.

Specifies whether to enable the migration assessment feature. The feature checks whether the schemas in the source and destination databases meet the migration requirements. Valid values: true and false.

triggerMode

This parameter is required for all scenarios.

The method used to migrate triggers from the source database. Valid values:

  • manual

  • auto

destSSL

The destination database is an Alibaba Cloud database instance or a self-managed database hosted on an ECS instance.

Specifies whether to encrypt the connection to the destination database. Valid values:

  • 0: does not encrypt the connection.

  • 1: encrypts the connection by using SSL.

src.sqlserver.schema.mapper.mode

The source database is an ApsaraDB RDS for SQL Server instance or a self-managed SQL Server database.

The schema mapping mode between the source and destination databases. For more information, see Source ApsaraDB RDS for SQL Server instance and self-managed SQL Server database.

Table 15. Destination PolarDB for MySQL cluster

Parameter

Configuration condition

Description

whitelist.dms.online.ddl.enable

The DTS instance is a data migration or synchronization instance. The source database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, or a PolarDB for MySQL cluster. For more information, see Source database parameters.

Specifies whether to replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database. The six parameters must be used together.

sqlparser.dms.original.ddl

whitelist.ghost.online.ddl.enable

sqlparser.ghost.original.ddl

online.ddl.shadow.table.rule

online.ddl.trash.table.rule

anySinkTableEngineType

This parameter is required for all scenarios.

The engine type of the PolarDB for MySQL cluster. Valid values:

  • innodb: the default storage engine.

  • xengine: the database storage engine for online transaction processing (OLTP).

triggerMode

This parameter is required for all scenarios.

The method used to migrate triggers from the source database. Valid values:

  • manual

  • auto

src.sqlserver.schema.mapper.mode

The source database is an ApsaraDB RDS for SQL Server instance or a self-managed SQL Server database.

The schema mapping mode between the source and destination databases. For more information, see Source ApsaraDB RDS for SQL Server instance and self-managed SQL Server database.

Table 16. AnalyticDB for MySQL

Parameter

Configuration condition

Description

whitelist.dms.online.ddl.enable

The DTS instance is a data migration or synchronization instance. The source database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, or a PolarDB for MySQL cluster. For more information, see Source database parameters.

Specifies whether to replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database. The six parameters must be used together.

sqlparser.dms.original.ddl

whitelist.ghost.online.ddl.enable

sqlparser.ghost.original.ddl

online.ddl.shadow.table.rule

online.ddl.trash.table.rule

triggerMode

This parameter is required for all scenarios.

The method used to migrate triggers from the source database. Valid values:

  • manual

  • auto

src.sqlserver.schema.mapper.mode

The source database is an ApsaraDB RDS for SQL Server instance or a self-managed SQL Server database.

The schema mapping mode between the source and destination databases. For more information, see Source ApsaraDB RDS for SQL Server instance and self-managed SQL Server database.

traceDatasource

This parameter is required for all scenarios.

Specifies whether to enable the multi-table merging feature. Valid values: true and false.

tagColumnValue

You need to specify whether to customize the tag column.

Specifies whether to customize the __dts_data_source tag column. Valid values:

  • tagColumnValue: customizes the tag column.

    Important

    If you set this parameter to tagColumnValue, you must specify the value of the tag column in the DbList parameter. For more information, see Objects of DTS tasks.

  • notTagColumnValue: does not customize the tag column.

    Important

    The tag column can be customized only for DTS instances that are configured after purchase.

adsSqlType

You need to select the SQL operations that you want to incrementally synchronize or migrate at the instance level.

The SQL operations that you want to incrementally synchronize or migrate at the instance level. Separate multiple SQL operations with commas (,). Valid values:

  • insert

  • update

  • delete

  • alterTable

  • truncateTable

  • createTable

  • dropTable

Table 17. AnalyticDB for PostgreSQL

Parameter

Configuration condition

Description

whitelist.dms.online.ddl.enable

The DTS instance is a data migration or synchronization instance. The source database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, or a PolarDB for MySQL cluster. For more information, see Source database parameters.

Specifies whether to replicate the temporary tables that are generated by online DDL operations performed on source tables to the destination database. The six parameters must be used together.

sqlparser.dms.original.ddl

whitelist.ghost.online.ddl.enable

sqlparser.ghost.original.ddl

online.ddl.shadow.table.rule

online.ddl.trash.table.rule

isTargetDbCaseSensitive

The source database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, an Oracle database, an ApsaraDB RDS for SQL Server instance, or a self-managed SQL Server database.

Specifies whether to enclose the names of destination objects in double quotation marks ("). Valid values: true and false.

syncOperation

You need to select the SQL operations that you want to incrementally synchronize or migrate at the instance level.

The SQL operations that you want to incrementally synchronize or migrate at the instance level. Separate multiple SQL operations with commas (,). Valid values:

  • insert

  • update

  • delete

  • alterTable

  • truncateTable

  • createTable

  • dropTable

  • createDB

  • dropDB

Table 18. Destination ApsaraDB RDS for MariaDB instance

Parameter

Configuration condition

Description

triggerMode

This parameter is required for all scenarios.

The method used to migrate triggers from the source database. Valid values:

  • manual

  • auto

destSSL

The destination database is an Alibaba Cloud database instance or a self-managed database hosted on an ECS instance.

Specifies whether to encrypt the connection to the destination database. Valid values:

  • 0: does not encrypt the connection.

  • 1: encrypts the connection by using SSL.

Table 19. Destination ApsaraDB for MongoDB instance and self-managed MongoDB database

Parameter

Configuration condition

Description

destEngineArchType

This parameter is required for all scenarios.

The architecture type of the destination MongoDB database. Valid values:

  • 0: standalone architecture.

  • 1: replica set architecture.

  • 2: sharded cluster architecture.

Table 20. Destination Tair (Redis OSS-Compatible) and self-managed Redis database

Parameter

Configuration condition

Description

destKvStoreMode

The access method of the destination database is not set to Alibaba Cloud Instance.

The deployment mode of the destination self-managed Redis database. Valid values:

  • single: standalone deployment.

  • cluster: cluster deployment.

any.sink.redis.expire.extension.seconds

This parameter is required for all scenarios.

The additional time period, in seconds, during which keys migrated from the source database to the destination database remain valid. To ensure data consistency, we recommend that you set this parameter to a value greater than 600 if any of the following commands are used:

EXPIRE key seconds
PEXPIRE key milliseconds
EXPIREAT key timestamp
PEXPIREAT key timestampMs

Table 21. Destination PolarDB for PostgreSQL (Compatible with Oracle) cluster

Parameter

Configuration condition

Description

destHostPortCtl

The destination database is accessed by using a public IP address.

Specifies whether to select multiple data sources for the PolarDB for PostgreSQL (Compatible with Oracle) cluster. Valid values:

  • single: Single Data Source.

  • multiple: Multiple Data Sources.

destHostPorts

The value of destHostPortCtl is set to multiple.

The IP addresses and port numbers of the nodes in the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster. Specify a value in the IP address:Port number format. Separate multiple values with commas (,).

Table 22. Destination Oracle database

Parameter

Configuration condition

Description

destOracleType

This parameter is required for all scenarios.

The architecture type of the Oracle database. Valid values:

  • sid: non-RAC.

  • serviceName: RAC or PDB.

Table 23. Destination DataHub project

Parameter

Configuration condition

Description

isUseNewAttachedColumn

This parameter is required for all scenarios.

The naming rules for additional columns. Valid values:

  • true: uses the new naming rules.

  • false: uses the original naming rules.

Table 24. Destination MaxCompute project

Parameter

Configuration condition

Description

isUseNewAttachedColumn

This parameter is required for all scenarios.

The naming rules for additional columns. Valid values:

  • true: uses the new naming rules.

  • false: uses the original naming rules.

partition

This parameter is required for all scenarios.

The name of the partition of incremental data tables.

  • Valid values if isUseNewAttachedColumn is set to true:

    • modifytime_year

    • modifytime_month

    • modifytime_day

    • modifytime_hour

    • modifytime_minute

  • Valid values if isUseNewAttachedColumn is set to false:

    • new_dts_sync_modifytime_year

    • new_dts_sync_modifytime_month

    • new_dts_sync_modifytime_day

    • new_dts_sync_modifytime_hour

    • new_dts_sync_modifytime_minute

Table 25. Destination Elasticsearch cluster

Parameter

Configuration condition

Description

indexMapping

This parameter is required for all scenarios.

The name of the index to be created in the destination Elasticsearch cluster. Valid values:

  • tb: The name of the index to be created is the same as that of the table.

  • db_tb: The name of the index to be created consists of the database name, an underscore (_), and the table name in sequence.

Table 26. Destination Kafka cluster

Parameter

Configuration condition

Description

destTopic

This parameter is required for all scenarios.

The topic of the migrated or synchronized objects in the destination Kafka cluster.

destVersion

This parameter is required for all scenarios.

The version of the destination Kafka cluster. Valid values: 1.0, 0.9, and 0.10.

Note

If the version of the Kafka cluster is 1.0 or later, set this parameter to 1.0.

destSSL

This parameter is required for all scenarios.

Specifies whether to encrypt the connection to the destination Kafka cluster. Valid values:

  • 0: does not encrypt the connection.

  • 3: encrypts the connection by using the SCRAM-SHA-256 algorithm.

sink.kafka.ddl.topic

You need to specify the topic that stores the DDL information.

The topic that stores the DDL information. If you do not specify this parameter, the DDL information is stored in the topic that is specified by destTopic.

kafkaRecordFormat

This parameter is required for all scenarios.

The storage format in which data is shipped to the destination Kafka cluster. Valid values:

  • canal_json: DTS uses Canal to parse the incremental logs of the source database and transfer the incremental data to the destination Kafka cluster in the Canal JSON format.

  • dts_avro: DTS uses the Avro format to transfer data. Avro is a data serialization format into which data structures or objects can be converted to facilitate storage and transmission.

  • shareplex_json: DTS uses the data replication software SharePlex to read the data in the source database and write the data to the destination Kafka cluster in the SharePlex JSON format.

  • debezium: DTS uses Debezium to transfer data. Debezium is a tool for capturing data changes. It streams data updates from the source database to the destination Kafka cluster in real time.

Note

For more information, see Data formats of a Kafka cluster.

destKafkaPartitionKey

This parameter is required for all scenarios.

The policy that is used to synchronize data to Kafka partitions. Valid values:

  • none: DTS synchronizes all data and DDL statements to Partition 0 of the destination topic.

  • database_table: DTS uses the database and table names as the partition key to calculate the hash value. Then, DTS synchronizes the data and DDL statements of each table to the corresponding partition of the destination topic.

  • columns: DTS uses a table column as the partition key to calculate the hash value. By default, the primary key is used as the partition key. If a table does not have a primary key, the unique key is used as the partition key. DTS synchronizes each row to the corresponding partition of the destination topic. You can specify one or more columns as partition keys to calculate the hash value.

destSchemaRegistry

This parameter is required for all scenarios.

Specifies whether to use Kafka Schema Registry. Valid values: yes and no.

destKafkaSchemaRegistryUrl

The value of destSchemaRegistry is set to yes.

The URL or IP address of your Avro schema that is registered with Kafka Schema Registry.
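The parameters in Table 26 can be combined into a single Reserve value for a destination Kafka cluster. An illustrative sketch (the topic name is a placeholder, and the values are examples only):

```python
import json

# Illustrative Reserve value for a destination Kafka cluster (Table 26).
# The topic name is a placeholder.
reserve = {
    "destTopic": "dts_example_topic",
    "destVersion": "1.0",                 # use "1.0" for Kafka 1.0 or later
    "destSSL": "0",                       # do not encrypt the connection
    "kafkaRecordFormat": "canal_json",    # ship data in the Canal JSON format
    "destKafkaPartitionKey": "database_table",  # hash by database and table names
    "destSchemaRegistry": "no",
}
print(json.dumps(reserve))
```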