
AnalyticDB: Synchronize data from a self-managed SQL Server database hosted on ECS to an AnalyticDB for PostgreSQL instance

Last Updated: Sep 09, 2024

This topic describes how to synchronize data from a self-managed SQL Server database that is hosted on Elastic Compute Service (ECS) to an AnalyticDB for PostgreSQL instance by using Data Transmission Service (DTS).

Prerequisites

  • The version of the self-managed SQL Server database is 2008, 2008 R2, 2012, 2014, 2016, 2017, or 2019.

    Note

    If you deploy the SQL Server database in an Always On availability group (AOAG), you must use the synchronous-commit mode.

  • The tables to be synchronized from the self-managed SQL Server database have primary keys or UNIQUE NOT NULL indexes. You can identify tables that do not meet this requirement by running the query sketch after this list.

  • The available storage space of the AnalyticDB for PostgreSQL instance is larger than the total size of the data in the self-managed SQL Server database.
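
A minimal T-SQL sketch that you can run in the source database to find user tables that have neither a primary key nor a unique index, as required by the prerequisite above. For unique indexes, also confirm that the indexed columns are defined as NOT NULL.

    -- List user tables that have neither a primary key nor a unique index.
    SELECT s.name AS schema_name,
           t.name AS table_name
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE t.is_ms_shipped = 0
      AND NOT EXISTS (
            SELECT 1
            FROM sys.indexes AS i
            WHERE i.object_id = t.object_id
              AND (i.is_primary_key = 1 OR i.is_unique = 1)
          );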

Usage notes

  • DTS uses read and write resources of the source and destination RDS instances during initial full data synchronization. This may increase the loads of the RDS instances. If the instance performance is unfavorable, the specification is low, or the data volume is large, database services may become unavailable. For example, DTS occupies a large amount of read and write resources in the following cases: a large number of slow SQL queries are performed on the source RDS instance, the tables have no primary keys, or a deadlock occurs in the destination RDS instance. Before data synchronization, evaluate the impact of data synchronization on the performance of the source and destination RDS instances. We recommend that you synchronize data during off-peak hours. For example, you can synchronize data when the CPU utilization of the source and destination RDS instances is less than 30%.

  • To ensure that the data synchronization task runs as expected, do not frequently back up the source database. We recommend that you retain log files for more than three days. Otherwise, DTS cannot retrieve the log entries after the log files are truncated. You can check the backup history by using the query sketch after this list.

  • To ensure that the latency of data synchronization is accurate, DTS adds a heartbeat table to the self-managed SQL Server database. The name of the heartbeat table is Source table name_dts_mysql_heartbeat.
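
To check how frequently backups run and how far back the backup history goes, as discussed in the note about log retention above, you can query the backup history in msdb. A minimal sketch, assuming the source database is named mytestdata as in the later examples:

    -- Oldest and most recent backups recorded for the source database.
    -- type: D = full backup, I = differential backup, L = transaction log backup.
    SELECT type,
           MIN(backup_finish_date) AS oldest_backup,
           MAX(backup_finish_date) AS latest_backup,
           COUNT(*) AS backup_count
    FROM msdb.dbo.backupset
    WHERE database_name = 'mytestdata'
    GROUP BY type;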

Billing

The task configuration fee depends on the synchronization type:

  • Schema synchronization and full data synchronization: free of charge.

  • Incremental data synchronization: charged. For more information, see Billing overview.

Limits

  • DTS does not synchronize the schemas of assemblies, service brokers, full-text indexes, full-text catalogs, distributed schemas, distributed functions, CLR stored procedures, CLR scalar-valued functions, CLR table-valued functions, internal tables, systems, or aggregate functions.

  • DTS does not synchronize data of the following types: TIMESTAMP, CURSOR, ROWVERSION, HIERARCHYID, SQL_VARIANT, SPATIAL GEOMETRY, SPATIAL GEOGRAPHY, and TABLE.

  • DTS does not synchronize tables that contain computed columns. You can identify affected tables and columns by using the query sketch after this list.
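
A T-SQL sketch that flags tables with computed columns and columns whose data types appear in the unsupported list above. Run it in the source database. Note that ROWVERSION columns are reported under the type name timestamp, and CURSOR and TABLE are not regular column types, so they are not checked here.

    -- Tables that contain computed columns (not synchronized by DTS).
    SELECT DISTINCT s.name AS schema_name, t.name AS table_name
    FROM sys.computed_columns AS cc
    JOIN sys.tables AS t ON t.object_id = cc.object_id
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE t.is_ms_shipped = 0;

    -- Columns whose data types DTS does not synchronize.
    SELECT s.name AS schema_name, t.name AS table_name,
           c.name AS column_name, ty.name AS data_type
    FROM sys.columns AS c
    JOIN sys.tables AS t ON t.object_id = c.object_id
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    JOIN sys.types AS ty ON ty.user_type_id = c.user_type_id
    WHERE t.is_ms_shipped = 0
      AND ty.name IN ('timestamp', 'hierarchyid', 'sql_variant', 'geometry', 'geography');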

SQL operations that can be synchronized

  • DML operations: INSERT, UPDATE, and DELETE

  • DDL operation: ADD COLUMN

    Note

    DTS does not synchronize transactional DDL operations.
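
For reference, the first statement below is an ADD COLUMN operation that DTS can synchronize. The second wraps the same kind of DDL in an explicit transaction, which is one common reading of a transactional DDL operation; this reading, and the table name customer, are assumptions for illustration only.

    -- Synchronized: a plain ADD COLUMN statement.
    ALTER TABLE dbo.customer ADD remark NVARCHAR(100) NULL;
    GO

    -- Assumed not to be synchronized: the same kind of DDL inside an explicit transaction.
    BEGIN TRANSACTION;
    ALTER TABLE dbo.customer ADD remark2 NVARCHAR(100) NULL;
    COMMIT TRANSACTION;
    GO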

Permissions required for database accounts

  • Self-managed SQL Server database: the sysadmin permission.

  • AnalyticDB for PostgreSQL instance:

    • LOGIN permission

    • SELECT, CREATE, INSERT, UPDATE, and DELETE permissions on the destination tables

    • CONNECT and CREATE permissions on the destination database

    • CREATE permission on the destination schemas

    • COPY permission (the permission to perform memory-based batch copy operations)

Note

You can use the initial account of the AnalyticDB for PostgreSQL instance.
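
The following sketch shows one way to create accounts with these permissions. The account name dtssync, the destination database destdb, the schema testschema, and the table customer are hypothetical examples; adapt them to your environment. The COPY permission is not granted here; as the note above states, you can simply use the initial account of the AnalyticDB for PostgreSQL instance.

    -- Self-managed SQL Server: create a login for DTS and grant it the sysadmin role.
    CREATE LOGIN dtssync WITH PASSWORD = 'YourStrongPassword1!';
    GO
    -- On SQL Server 2008 or 2008 R2, use: EXEC sp_addsrvrolemember 'dtssync', 'sysadmin';
    ALTER SERVER ROLE sysadmin ADD MEMBER dtssync;
    GO

    -- AnalyticDB for PostgreSQL: grant the destination account the required permissions.
    CREATE ROLE dtssync WITH LOGIN PASSWORD 'YourStrongPassword1!';
    GRANT CONNECT, CREATE ON DATABASE destdb TO dtssync;
    GRANT CREATE ON SCHEMA testschema TO dtssync;
    GRANT SELECT, INSERT, UPDATE, DELETE ON testschema.customer TO dtssync; -- repeat for each destination table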

Preparations

Before you configure a data synchronization task, configure log settings and create clustered indexes on the self-managed SQL Server database.

  1. Run the following command on the self-managed SQL Server database to change the recovery model to full. You can also change the recovery model by using SQL Server Management Studio (SSMS). For more information, see View or Change the Recovery Model of a Database (SQL Server).

    USE master;
    GO
    ALTER DATABASE <database_name> SET RECOVERY FULL WITH ROLLBACK IMMEDIATE;
    GO

    Parameters:

    <database_name>: the name of the source database.

    Example:

    USE master;
    GO
    ALTER DATABASE mytestdata SET RECOVERY FULL WITH ROLLBACK IMMEDIATE;
    GO
  2. Run the following command to create a full backup of the source database. Skip this step if you have already created a full backup.

    BACKUP DATABASE <database_name> TO DISK='<physical_backup_device_name>';
    GO

    Parameters:

    • <database_name>: the name of the source database.

    • <physical_backup_device_name>: the storage path and file name of the backup file.

    Example:

    BACKUP DATABASE mytestdata TO DISK='D:\backup\dbdata.bak';
    GO
  3. Run the following command to back up the log entries of the source database:

    BACKUP LOG <database_name> TO DISK='<physical_backup_device_name>' WITH INIT;
    GO

    Parameters:

    • <database_name>: the name of the source database.

    • <physical_backup_device_name>: the storage path and file name of the backup file.

    Example:

    BACKUP LOG mytestdata TO DISK='D:\backup\dblog.bak' WITH INIT;
    GO
  4. Create clustered indexes for the tables that you want to synchronize. For more information, see Create Clustered Indexes.
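
After you complete the preceding steps, you can optionally verify the settings. A minimal sketch, assuming the source database is named mytestdata as in the examples above (run the second query in the source database):

    -- 1. Confirm that the recovery model is FULL.
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = 'mytestdata';

    -- 2. List user tables that still lack a clustered index.
    SELECT s.name AS schema_name, t.name AS table_name
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE t.is_ms_shipped = 0
      AND NOT EXISTS (
            SELECT 1
            FROM sys.indexes AS i
            WHERE i.object_id = t.object_id
              AND i.type = 1  -- 1 = clustered index
          );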

Procedure

  1. Purchase a data synchronization instance. For more information, see Purchase a DTS instance.

    Note

    On the buy page, set the Source Instance parameter to SQL Server, the Destination Instance parameter to AnalyticDB for PostgreSQL, and the Synchronization Topology parameter to One-way Synchronization.

  2. Log on to the DTS console.

  3. In the left-side navigation pane, click Data Synchronization.

  4. At the top of the Synchronization Tasks page, select the region where the destination instance resides.

  5. Find the data synchronization instance and click Configure Synchronization Channel in the Actions column.

  6. Configure the source and destination instances.

    Section

    Parameter

    Description

    N/A

    Synchronization Task Name

    The task name that DTS automatically generates. We recommend that you specify a descriptive name that makes it easy to identify the task. You do not need to use a unique task name.

    Source Instance Details

    Instance Type

    The instance type of the source database. In this example, User-Created Database in ECS Instance is selected.

    Note

    If you select other instance types, you must prepare the environment that is required for the source database. For more information, see Preparation overview.

    Instance Region

    The source region that you selected on the buy page. You cannot change the value of this parameter.

    ECS Instance ID

    The ID of the Elastic Compute Service (ECS) instance that hosts the source database.

    Database Type

    The value of this parameter is fixed to SQLServer and cannot be changed.

    Port Number

    The service port number of the source database. The default port number for SQL Server is 1433.

    Database Account

    The account of the source database. For information about the permissions that are required for the account, see Permissions required for database accounts.

    Database Password

    The password of the database account.

    Encryption

    Specifies whether to encrypt the connection to the source instance. Select Non-encrypted or SSL-encrypted.

    Note

    The Encryption parameter is available only within regions in the Chinese mainland and the China (Hong Kong) region.

    Destination Instance Details

    Instance Type

    The value of this parameter is fixed to AnalyticDB for PostgreSQL.

    Instance Region

    The destination region that you selected on the buy page. You cannot change the value of this parameter.

    Instance ID

    The ID of the destination AnalyticDB for PostgreSQL instance.

    Database Name

    The name of the destination database.

    Database Account

    The database account of the destination AnalyticDB for PostgreSQL instance. For information about the permissions that are required for the account, see Permissions required for database accounts.

    Database Password

    The password of the database account.

  7. In the lower-right corner of the page, click Set Whitelist and Next.

    • If the source or destination database is an Alibaba Cloud database instance, such as an ApsaraDB RDS for MySQL or ApsaraDB for MongoDB instance, DTS automatically adds the CIDR blocks of DTS servers to the IP address whitelist of the instance.

    • If the source or destination database is a self-managed database hosted on an Elastic Compute Service (ECS) instance, DTS automatically adds the CIDR blocks of DTS servers to the security group rules of the ECS instance, and you must make sure that the ECS instance can access the database. If the self-managed database is hosted on multiple ECS instances, you must manually add the CIDR blocks of DTS servers to the security group rules of each ECS instance.

    • If the source or destination database is a self-managed database that is deployed in a data center or provided by a third-party cloud service provider, you must manually add the CIDR blocks of DTS servers to the IP address whitelist of the database to allow DTS to access the database. For more information, see Add the CIDR blocks of DTS servers.

    Warning

    If the CIDR blocks of DTS servers are automatically or manually added to the whitelist of the database or instance, or to the ECS security group rules, security risks may arise. Therefore, before you use DTS to synchronize data, you must understand and acknowledge the potential risks and take preventive measures, including but not limited to the following measures: enhancing the security of your username and password, limiting the ports that are exposed, authenticating API calls, regularly checking the whitelist or ECS security group rules and forbidding unauthorized CIDR blocks, or connecting the database to DTS by using Express Connect, VPN Gateway, or Smart Access Gateway.

  8. Select the synchronization policy and the objects to be synchronized.

    Setting

    Description

    Initialize Synchronization

    Initial Schema Synchronization, Initial Full Data Synchronization, and Initial Incremental Data Synchronization are selected by default. After the precheck is complete, DTS synchronizes the schemas and data of objects from the source instance to the destination instance. The schemas and data are the basis for subsequent incremental synchronization.

    Processing Mode In Existed Target Table

    • Pre-check and Intercept: checks whether the destination database contains tables that have the same names as tables in the source database. If the destination database does not contain tables that have the same names as tables in the source database, the precheck is passed. Otherwise, an error is returned during precheck and the data synchronization task cannot be started.

      Note

      If the source and destination databases contain identical table names and the tables in the destination database cannot be deleted or renamed, you can use the object name mapping feature to rename the tables that are synchronized to the destination database. For more information, see Rename an object to be synchronized.

    • Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.

      Warning

      If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to potential risks.

      • If the source and destination databases have the same schema, DTS does not synchronize data records that have the same primary keys as data records in the destination database.

      • If the source and destination databases have different schemas, initial data synchronization may fail. In this case, only some of the columns are synchronized, or the data synchronization task fails.

    Merge Multi Tables

    • Yes: In online transaction processing (OLTP) scenarios, business tables are often sharded to improve query performance. In contrast, AnalyticDB for PostgreSQL can store a large amount of data in a single table and run SQL queries against it more efficiently. You can merge multiple source tables that have the same schema into a single destination table. This feature allows you to synchronize data from multiple tables in the source database to a single table in AnalyticDB for PostgreSQL.

      Note
      • After you select multiple tables from the source database, you must change the names of these tables to the name of the destination table in AnalyticDB for PostgreSQL. To do this, you can use the object name mapping feature. For more information about how to use this feature, see Rename an object to be synchronized.

      • You must add a column named __dts_data_source to the destination table in AnalyticDB for PostgreSQL. This column is used to record the data source. The data type of this column is TEXT. DTS writes column values in the following format: <Data synchronization instance ID>:<Source database name>.<Source schema name>.<Source table name>. Such column values allow DTS to identify each source table. For example, dts********:dtstestdata.testschema.customer1 indicates that the source table is customer1.

      • If you set this parameter to Yes, all the selected source tables in the task are merged into a destination table. If you do not need to merge specific source tables, you can create a separate data synchronization task for these tables.

    • No: the default value.

    Select the operation types

    Select the types of operations that you want to synchronize based on your business requirements. All operation types are selected by default.

    Select the objects to be synchronized

    Select one or more objects from the Available section and click the Rightwards arrow icon to add the objects to the Selected section.

    In this scenario, data synchronization is performed between heterogeneous databases. Therefore, the objects to synchronize are tables, and other objects such as views, triggers, and stored procedures are not synchronized to the destination database.

    Note
    • By default, after an object is synchronized to the destination instance, the name of the object remains unchanged. You can use the object name mapping feature to rename the objects that are synchronized to the destination instance. For more information, see Rename an object to be synchronized.

    • If you set the Merge Multi Tables parameter to Yes, you must change the names of the selected tables to the name of the destination table in the AnalyticDB for PostgreSQL instance. To do this, you can use the object name mapping feature.

    Add quotation marks to the target object

    Specifies whether to enclose object names in quotation marks. If you select Yes and the following conditions are met, DTS encloses object names in single quotation marks (') or double quotation marks (") during schema synchronization and incremental data synchronization.

    • The business environment of the source database is case-sensitive, and the database name contains both uppercase and lowercase letters.

    • A source table name does not start with a letter, or it contains characters other than letters, digits, and the special characters listed in the following note.

      Note

      A source table name can contain only the following special characters: underscores (_), number signs (#), and dollar signs ($).

    • The names of the schemas, tables, or columns that you want to synchronize are keywords, reserved keywords, or invalid characters in the destination database.

    Note

    If you select Yes, after DTS synchronizes data to the destination database, you must enclose the object name in quotation marks when you query the object.

    Rename Databases and Tables

    You can use the object name mapping feature to rename the objects that are synchronized to the destination instance. For more information, see Object name mapping.

    Retry Time for Failed Connections

    By default, if DTS fails to connect to the source or destination database, DTS retries within the next 720 minutes (12 hours). You can specify the retry time based on your needs. If DTS reconnects to the source and destination databases within the specified time, DTS resumes the data synchronization task. Otherwise, the data synchronization task fails.

    Note

    When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time based on your business needs. You can also release the DTS instance at your earliest opportunity after the source and destination instances are released.

  9. Specify the table type, primary key column, and distribution key of the tables that you want to synchronize to the AnalyticDB for PostgreSQL instance.

    Note

    For more information about primary key columns and distribution columns, see Manage tables and Define table distribution.

  10. In the lower-right corner of the page, click Precheck.

    Note
    • Before you can start the data synchronization task, DTS performs a precheck. You can start the data synchronization task only after the task passes the precheck.

    • If the task fails to pass the precheck, you can click the tips icon next to each failed item to view details.

      • After you troubleshoot the issues based on the details, initiate a new precheck.

      • If you do not need to troubleshoot the issues, ignore the failed items and initiate a new precheck.

  11. Close the Precheck dialog box after the following message is displayed: The precheck is passed. Then, the data synchronization task starts.

  12. Wait until the initial synchronization is complete and the data synchronization task is in the Synchronizing state.

    You can view the status of the data synchronization task on the Synchronization Tasks page.
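
    After the initial synchronization is complete, if you set the Merge Multi Tables parameter to Yes, you can check which source table each row in the merged destination table came from by querying the __dts_data_source column described above. A minimal sketch, assuming a hypothetical destination table named customer:

    -- Count the rows contributed by each source table. Column values use the format
    -- <Data synchronization instance ID>:<Source database name>.<Source schema name>.<Source table name>.
    SELECT __dts_data_source, COUNT(*) AS row_count
    FROM customer
    GROUP BY __dts_data_source
    ORDER BY row_count DESC;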