
Data Transmission Service: Migrate data from a PolarDB for PostgreSQL cluster to an ApsaraDB RDS for PostgreSQL instance

Last Updated: Sep 09, 2024

This topic describes how to migrate data from a PolarDB for PostgreSQL cluster to an ApsaraDB RDS for PostgreSQL instance by using Data Transmission Service (DTS).

Prerequisites

  • The source PolarDB for PostgreSQL cluster is created. For more information, see Create a cluster.

  • The destination ApsaraDB RDS for PostgreSQL instance is created. For more information, see Create an instance.

  • The wal_level parameter is set to logical for the source PolarDB for PostgreSQL cluster. For more information, see Specify cluster parameters. You can verify the current value by using the query shown after this list.

  • The available storage space of the destination database is larger than the total size of the data in the source database.
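
If you want to confirm the wal_level setting before you create the migration task, you can run the following query against the source cluster. This is only a quick check; the parameter itself must still be changed in the console as described in Specify cluster parameters.

    -- Run against the source PolarDB for PostgreSQL cluster.
    -- The value must be 'logical' before incremental data migration can work.
    SHOW wal_level;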

Usage notes

Note
  • During schema migration, DTS migrates foreign keys from the source database to the destination database.

  • During full data migration and incremental data migration, DTS temporarily disables the constraint check and cascade operations on foreign keys at the session level. If you perform the cascade update and delete operations on the source database during data migration, data inconsistency may occur.

Limits on the source database

  • The tables to be migrated from the source PolarDB for PostgreSQL cluster must contain primary keys or UNIQUE NOT NULL indexes. You can use the example query shown after this list to find tables that do not meet this requirement.

  • If the source database has long-running transactions and the data migration task contains incremental data migration, the write-ahead log (WAL) files that are generated before the long-running transactions are committed may not be cleared and can accumulate, which results in insufficient storage space in the source database.

  • If you want to perform a primary/secondary switchover on the source PolarDB for PostgreSQL cluster, the logical replication slot failover feature must be enabled. This prevents logical subscriptions from being interrupted and ensures that your data migration task can run as expected. For more information, see Logical replication slot failover.

  • Limits on operations to be performed on the source database:

    • During schema migration and full data migration, do not execute DDL statements to change the schemas of databases or tables. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency between the source and destination databases occurs. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.
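
  To check the primary key requirement in advance, you can run a query similar to the following against the source database. This is a simplified sketch: it lists regular tables that have neither a PRIMARY KEY nor a UNIQUE constraint, and it does not detect UNIQUE NOT NULL indexes that were created without a constraint, so review the results manually.

    -- Simplified check: list user tables without a PRIMARY KEY or UNIQUE constraint.
    select n.nspname as schema_name, c.relname as table_name
    from pg_class c
    join pg_namespace n on n.oid = c.relnamespace
    where c.relkind = 'r'
      and n.nspname not in ('pg_catalog', 'information_schema')
      and not exists (
        select 1
        from pg_constraint con
        where con.conrelid = c.oid
          and con.contype in ('p', 'u')
      );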

Other limits

  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination database. As a result, after full data migration is complete, the tablespace used by the destination database is larger than that used by the source database.

  • DTS attempts to resume data migration tasks that failed within the last seven days. Before you switch workloads to the destination database, you must stop or release the failed tasks. You can also execute the REVOKE statement to revoke the write permissions from the accounts that are used by DTS to access the destination database. Otherwise, the data in the source database overwrites the data in the destination database after the data migration task is resumed.

  • DTS does not check the validity of metadata such as sequences. You must manually check the validity of metadata.

  • During incremental data migration, DTS creates a replication slot in the source database to replicate data. The name of the replication slot is prefixed with dts_sync_. By using this replication slot, DTS can obtain the incremental logs that the source database generated within the last 15 minutes.

    Note
    • After the DTS instance is released, the replication slot is automatically deleted. If you change the password of the source database or remove the IP addresses of DTS from the IP address whitelist, the replication slot cannot be deleted automatically. In that case, you must manually delete the replication slot in the source database to prevent WAL logs from piling up and affecting the availability of the source database (see the example statements after this note).

    • If the data migration task is released or fails, DTS automatically deletes the replication slot. If a primary/secondary switchover is performed on the source database, you must log on to the secondary database to manually delete the replication slot.
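
    The following statements are a hedged example of how you can inspect and, if necessary, remove the DTS replication slot in the source database. The slot name shown in pg_drop_replication_slot is hypothetical; replace it with the name returned by the first query, and make sure the slot is no longer active before you drop it.

    -- List replication slots created by DTS (the names are prefixed with dts_sync_).
    select slot_name, active, restart_lsn
    from pg_replication_slots
    where slot_name like 'dts_sync_%';

    -- Drop a slot that is no longer needed (replace the hypothetical name with the one returned above).
    select pg_drop_replication_slot('dts_sync_example_slot');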

  • After your workloads are switched to the destination database, the sequences in the destination database do not continue from the maximum values of the corresponding sequences in the source database. Therefore, before you switch your workloads to the destination database, you must query the maximum values of the sequences in the source database and then specify the queried values as the initial values of the corresponding sequences in the destination database. You can execute the following statements to query the maximum values of the sequences in the source database:

    -- Iterate over all sequences (relkind = 'S') in the source database and print
    -- a setval() statement for each one. Run the printed statements against the
    -- destination database to set the initial values of the sequences.
    do language plpgsql $$
    declare
      nsp name;
      rel name;
      val int8;
    begin
      for nsp, rel in select nspname, relname from pg_class t2, pg_namespace t3 where t2.relnamespace = t3.oid and t2.relkind = 'S'
      loop
        -- Read the current value of the sequence.
        execute format($_$select last_value from %I.%I$_$, nsp, rel) into val;
        -- Print the setval() statement that starts the destination sequence after this value.
        raise notice '%',
        format($_$select setval('%I.%I'::regclass, %s);$_$, nsp, rel, val+1);
      end loop;
    end;
    $$;
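
    The block above only prints the setval() statements as notices; it does not modify any data. Collect the printed statements and run them against the destination database after the switchover. For example, for a hypothetical sequence public.orders_id_seq whose last value in the source database was 10000, the printed statement would look like the following:

    -- Hypothetical example of a printed statement to run in the destination database.
    select setval('public.orders_id_seq'::regclass, 10001);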
  • If the tables to be migrated contain foreign keys, triggers, or event triggers, and the account of the destination database is a privileged account or has the permissions of the superuser role, DTS temporarily sets the session_replication_role parameter to replica at the session level during full or incremental data migration. If the destination account does not have these permissions, you must manually set the session_replication_role parameter to replica in the destination database. While session_replication_role is set to replica, cascade update or delete operations performed in the source database may cause data inconsistency. After the data migration task is released, you can change the value of the session_replication_role parameter back to origin.
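
    The parameter change that DTS performs is equivalent to the following session-level statements. They are shown only to illustrate the setting; if your destination account lacks the required permissions, set the parameter for the destination database as described above (for example, in the console) rather than relying on a single session.

    -- Session-level illustration of the setting that DTS applies during migration.
    SET session_replication_role = replica;

    -- After the data migration task is released, switch back to the default behavior.
    SET session_replication_role = origin;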

Billing

  • Instance configuration fee:

    • Schema migration and full data migration: Free of charge.

    • Incremental data migration: Charged. For more information, see Billing overview.

  • Internet traffic fee: Charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Billing overview.

Migration types

  • Schema migration

    DTS migrates the schemas of the selected objects from the source database to the destination database.

  • Full data migration

    DTS migrates the historical data of required objects from the source database to the destination database.

  • Incremental data migration

    After full data migration is complete, DTS migrates incremental data from the source database to the destination database. Incremental data migration allows data to be migrated smoothly without service interruptions during the migration.

SQL operations that support incremental migration

  • DML: INSERT, UPDATE, and DELETE

  • DDL

    • DDL operations can be migrated only in the data migration tasks that are created after October 1, 2020.

    • If the database account of the source database is a privileged account, the following DDL statements can be migrated in data migration tasks:

      • CREATE TABLE and DROP TABLE

      • ALTER TABLE, including RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, and ALTER COLUMN DROP DEFAULT

      • TRUNCATE TABLE (The database engine version of the source PolarDB for PostgreSQL cluster must be PostgreSQL V11 or later.)

      • CREATE INDEX ON TABLE

    Important

    • You cannot migrate additional information of DDL statements, such as CASCADE or RESTRICT.

    • You cannot migrate DDL statements from a session in which the SET session_replication_role = replica statement is executed.

    • DDL statements that are executed by calling methods such as FUNCTION cannot be migrated.

    • If the SQL statements submitted by the source database at a time contain both DML and DDL statements, DTS does not migrate the DDL statements.

    • If the SQL statements submitted by the source database at a time contain DDL statements that are not to be migrated, DTS does not migrate the DDL statements.
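
    For example, assuming a hypothetical table named public.orders, the first statement below is a plain ALTER TABLE from the supported list and can be migrated, while the CASCADE keyword in the second statement is additional information that is not carried over:

    -- Supported DDL from the list above (hypothetical table name).
    ALTER TABLE public.orders ADD COLUMN remark text;

    -- The DROP TABLE statement can be migrated, but the CASCADE clause is not.
    DROP TABLE public.orders CASCADE;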

Permissions required for database accounts

  • PolarDB for PostgreSQL cluster (source)

    • Schema migration, full data migration, and incremental data migration: permissions of a privileged account

    • References: Create a database account

  • ApsaraDB RDS for PostgreSQL instance (destination)

    • Schema migration: the CREATE and USAGE permissions on the objects to be migrated

    • Full data migration: permissions of the schema owner

    • Incremental data migration: permissions of the schema owner

    • References: Create an account
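
For the destination ApsaraDB RDS for PostgreSQL instance, the schema-level privileges can be granted with statements similar to the following sketch. The account name dts_migrator and the schema public are hypothetical; replace them with your own names.

    -- Grant the CREATE and USAGE privileges on the target schema to the migration account (hypothetical names).
    GRANT CREATE, USAGE ON SCHEMA public TO dts_migrator;

    -- Alternatively, make the migration account the owner of the schema.
    ALTER SCHEMA public OWNER TO dts_migrator;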

Procedure

  1. Go to the Data Migration Tasks page.

    1. Log on to the Data Management (DMS) console.

    2. In the top navigation bar, move the pointer over DTS.

    3. Choose DTS (DTS) > Data Migration.

  2. From the drop-down list on the right side of Data Migration Tasks, select the region in which your data migration instance resides.

    Note

    If you use the new DTS console, you must select the region in which the data migration instance resides in the upper-left corner.

  3. Click Create Task. On the Create Data Migration Task page, configure the source and destination databases. The following table describes the parameters.

    • Task Name: The name of the DTS task. DTS automatically generates a task name. We recommend that you specify a descriptive name that makes it easy to identify the task. You do not need to specify a unique task name.

    Source Database

    • Database Type: The type of the source database. Select PolarDB for PostgreSQL.

    • Access Method: The access method of the source database. Select Alibaba Cloud Instance.

    • Instance Region: The region in which the source PolarDB for PostgreSQL cluster resides.

    • Replicate Data Across Alibaba Cloud Accounts: Specifies whether to migrate data across Alibaba Cloud accounts. In this example, No is selected.

    • Instance ID: The ID of the source PolarDB for PostgreSQL cluster.

    • Database Name: The name of the source database in the PolarDB for PostgreSQL cluster.

    • Database Account: The database account of the PolarDB for PostgreSQL cluster.

    • Database Password: The password that is used to access the database instance.

    Destination Database

    • Database Type: The type of the destination database. Select PostgreSQL.

    • Access Method: The access method of the destination database. Select Alibaba Cloud Instance.

    • Instance Region: The region in which the destination ApsaraDB RDS for PostgreSQL instance resides.

    • Instance ID: The ID of the destination ApsaraDB RDS for PostgreSQL instance.

    • Database Name: The name of the destination database in the ApsaraDB RDS for PostgreSQL instance.

    • Database Account: The database account of the destination ApsaraDB RDS for PostgreSQL instance.

    • Database Password: The password that is used to access the database instance.

    • Encryption: Specifies whether to encrypt the connection to the source database. You can configure this parameter based on your business requirements. In this example, Non-encrypted is selected.

      If you want to establish an SSL-encrypted connection to the source database, perform the following steps: Select SSL-encrypted, upload CA Certificate, Client Certificate, and Private Key of Client Certificate as needed, and then specify Private Key Password of Client Certificate.

      Note
      • If you set Encryption to SSL-encrypted for a self-managed PostgreSQL database, you must upload CA Certificate.

      • If you want to use the client certificate, you must upload Client Certificate and Private Key of Client Certificate and specify Private Key Password of Client Certificate.

      • For information about how to configure SSL encryption for an ApsaraDB RDS for PostgreSQL instance, see SSL encryption.

  4. In the lower part of the page, click Test Connectivity and Proceed.

    • If the source or destination database is an Alibaba Cloud database instance, such as an ApsaraDB RDS for MySQL or ApsaraDB for MongoDB instance, DTS automatically adds the CIDR blocks of DTS servers to the IP address whitelist of the instance.

    • If the source or destination database is a self-managed database hosted on an Elastic Compute Service (ECS) instance, DTS automatically adds the CIDR blocks of DTS servers to the security group rules of the ECS instance, and you must make sure that the ECS instance can access the database. If the self-managed database is hosted on multiple ECS instances, you must manually add the CIDR blocks of DTS servers to the security group rules of each ECS instance.

    • If the source or destination database is a self-managed database that is deployed in a data center or provided by a third-party cloud service provider, you must manually add the CIDR blocks of DTS servers to the IP address whitelist of the database to allow DTS to access the database. For more information, see the CIDR blocks of DTS servers section of the Add the CIDR blocks of DTS servers topic.

    Warning

    If the public CIDR blocks of DTS servers are automatically or manually added to the whitelist of a database instance or to the security group rules of an ECS instance, security risks may arise. Therefore, before you use DTS to migrate data, you must understand and acknowledge the potential risks and take preventive measures, including but not limited to the following measures: enhancing the security of your username and password, limiting the ports that are exposed, authenticating API calls, regularly checking the whitelist or security group rules and forbidding unauthorized CIDR blocks, or connecting the database instance to DTS by using Express Connect, VPN Gateway, or Smart Access Gateway.

  5. Configure the objects to be migrated and advanced settings.


    Migration Types

    • To perform only full data migration, select Schema Migration and Full Data Migration.

    • To ensure service continuity during data migration, select Schema Migration, Full Data Migration, and Incremental Data Migration.

    Note

    If you do not select Incremental Data Migration, we recommend that you do not write data to the source database during data migration. This ensures data consistency between the source and destination databases.

    Processing Mode of Conflicting Tables

    • Precheck and Report Errors: checks whether the destination database contains tables that use the same names as tables in the source database. If the source and destination databases do not contain tables that have identical table names, the precheck is passed. Otherwise, an error is returned during the precheck and the data migration task cannot be started.

      Note

      If the source and destination databases contain tables with identical names and the tables in the destination database cannot be deleted or renamed, you can use the object name mapping feature to rename the tables that are migrated to the destination database. For more information, see Map object names.

    • Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.

      Warning

      If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to the following potential risks:

      • If the source and destination databases have the same schema, and a data record has the same primary key as an existing data record in the destination database, the following scenarios may occur:

        • During full data migration, DTS does not migrate the data record to the destination database. The existing data record in the destination database is retained.

        • During incremental data migration, DTS migrates the data record to the destination database. The existing data record in the destination database is overwritten.

      • If the source and destination databases have different schemas, only specific columns are migrated or the data migration task fails. Proceed with caution.

    Capitalization of Object Names in Destination Instance

    The capitalization of database names, table names, and column names in the destination instance. By default, DTS default policy is selected. You can select other options to make sure that the capitalization of object names is consistent with that of the source or destination database. For more information, see Specify the capitalization of object names in the destination instance.

    Source Objects

    Select one or more objects from the Source Objects section. Click the rightward arrow icon to add the objects to the Selected Objects section.

    Selected Objects

    • To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see the Map the name of a single object section of the Map object names topic.

    • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see the Map multiple object names at a time section of the Map object names topic.

    Note

    If you use the object name mapping feature to rename an object, other objects that depend on the object may fail to be migrated.
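
    If you are unsure whether the destination database already contains tables with the same names as the source tables (see Processing Mode of Conflicting Tables above), you can run the following query on both the source and the destination database and compare the output. This is a generic sketch, not a DTS feature.

    -- List user tables; run on both databases and compare the results.
    select table_schema, table_name
    from information_schema.tables
    where table_type = 'BASE TABLE'
      and table_schema not in ('pg_catalog', 'information_schema')
    order by 1, 2;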

  6. Click Next: Advanced Settings to configure advanced settings.


    Dedicated Cluster for Task Scheduling

    By default, DTS schedules the data migration task to the shared cluster if you do not specify a dedicated cluster. If you want to improve the stability of data migration tasks, purchase a dedicated cluster. For more information, see What is a DTS dedicated cluster.

    Retry Time for Failed Connections

    The retry time range for failed connections. If the source or destination database fails to be connected after the data migration task is started, DTS immediately retries a connection within the retry time range. Valid values: 10 to 1440. Unit: minutes. Default value: 720. We recommend that you set the parameter to a value greater than 30. If DTS is reconnected to the source and destination databases within the specified retry time range, DTS resumes the data migration task. Otherwise, the data migration task fails.

    Note
    • If you specify different retry time ranges for multiple data migration tasks that share the same source or destination database, the value that is specified later takes precedence.

    • When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time range based on your business requirements. You can also release the DTS instance at the earliest opportunity after the source database and destination instance are released.

    Retry Time for Other Issues

    The retry time range for other issues. For example, if DDL or DML operations fail to be performed after the data migration task is started, DTS immediately retries the operations within the retry time range. Valid values: 1 to 1440. Unit: minutes. Default value: 10. We recommend that you set the parameter to a value greater than 10. If the failed operations are successfully performed within the specified retry time range, DTS resumes the data migration task. Otherwise, the data migration task fails.

    Important

    The value of the Retry Time for Other Issues parameter must be smaller than the value of the Retry Time for Failed Connections parameter.

    Enable Throttling for Full Data Migration

    Specifies whether to enable throttling for full data migration. During full data migration, DTS uses the read and write resources of the source and destination databases. This may increase the loads of the database servers. You can enable throttling for full data migration based on your business requirements. To configure throttling, you must configure the Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) parameters. This reduces the loads of the destination database server.

    Note

    You can configure this parameter only if you select Full Data Migration for the Migration Types parameter.

    Enable Throttling for Incremental Data Migration

    Specifies whether to enable throttling for incremental data migration. To configure throttling, you must configure the RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s) parameters. This reduces the loads of the destination database server.

    Note

    You can configure this parameter only if you select Incremental Data Migration for the Migration Types parameter.

    Environment Tag

    The environment tag that is used to identify the DTS instance. You can select an environment tag based on your business requirements. In this example, you do not need to configure this parameter.

    Configure ETL

    Specifies whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL? Valid values: Yes and No.

    Monitoring and Alerting

    Specifies whether to configure alerting for the data migration task. If the task fails or the migration latency exceeds the specified threshold, the alert contacts receive notifications. Valid values: Yes and No.

  7. Save the task settings and run a precheck.

    • To view the parameters to be specified when you call the relevant API operation to configure the DTS task, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

    • If you do not need to view or have viewed the parameters, click Next: Save Task Settings and Precheck in the lower part of the page.

    Note
    • Before you can start the data migration task, DTS performs a precheck. You can start the data migration task only after the task passes the precheck.

    • If the task fails to pass the precheck, click View Details next to each failed item. After you analyze the causes based on the check results, troubleshoot the issues. Then, run a precheck again.

    • If an alert is triggered for an item during the precheck:

      • If an alert item cannot be ignored, click View Details next to the failed item and troubleshoot the issues. Then, run a precheck again.

      • If the alert item can be ignored, click Confirm Alert Details. In the View Details dialog box, click Ignore. In the message that appears, click OK. Then, click Precheck Again to run a precheck again. If you ignore the alert item, data inconsistency may occur, and your business may be exposed to potential risks.

  8. Wait until the Success Rate becomes 100%. Then, click Next: Purchase Instance.

  9. On the Purchase Instance page, configure the Instance Class parameter for the data migration instance. The following table describes the parameters.

    New Instance Class

    • Resource Group Settings: The resource group to which the data migration instance belongs. Default value: default resource group. For more information, see What is Resource Management?

    • Instance Class: DTS provides instance classes that vary in the migration speed. You can select an instance class based on your business scenario. For more information, see Specifications of data migration instances.

  10. Read and select the Data Transmission Service (Pay-as-you-go) Service Terms.

  11. Click Buy and Start. In the dialog box that appears, click OK.

    You can view the progress of the task in the task list.