This topic describes how to synchronize data from an ApsaraDB RDS for MySQL instance to an AnalyticDB for PostgreSQL instance in Serverless mode by using Data Transmission Service (DTS). The data synchronization feature provided by DTS allows you to transfer and analyze data with ease.
Prerequisites
- An ApsaraDB RDS for MySQL instance is created. For more information, see Create an ApsaraDB RDS for MySQL instance.
- An AnalyticDB for PostgreSQL instance in Serverless mode that runs V1.0.3.1 or later is created.
  - For information about how to create an AnalyticDB for PostgreSQL instance in Serverless mode, see Create an instance.
  - For information about how to view and update the minor engine version of an AnalyticDB for PostgreSQL instance, see View the minor engine version and Update the minor engine version.
Supported MySQL database types
Data from the following types of MySQL databases can be synchronized to AnalyticDB for PostgreSQL instances in Serverless mode. In this topic, an ApsaraDB RDS for MySQL instance is used as the source database to describe how to configure a data synchronization task. You can also follow the procedure to configure data synchronization tasks for other types of MySQL databases.
- ApsaraDB RDS for MySQL instance
- Self-managed database that is hosted on Elastic Compute Service (ECS)
- Self-managed database that is connected over Express Connect, VPN Gateway, or Smart Access Gateway
- Self-managed database that is connected over Database Gateway
- Self-managed database that is connected over Cloud Enterprise Network (CEN)
DTS can also synchronize data from PostgreSQL, SQL Server, and Db2 databases. For more information about supported databases, see Supported databases.
Usage notes
By default, DTS disables FOREIGN KEY constraints on the destination database in a data synchronization task. As a result, cascade update and delete operations performed on the source database are not synchronized to the destination database.
| Category | Description |
| --- | --- |
| Limits on the source database | |
| Other limits | |
| Special cases | |
Billing
| Synchronization type | Task configuration fee |
| --- | --- |
| Schema synchronization and full data synchronization | Free of charge. |
| Incremental data synchronization | Charged. For more information, see Billing overview. |
Supported synchronization topologies
- One-way one-to-one synchronization
- One-way one-to-many synchronization
- One-way many-to-one synchronization
SQL operations that can be synchronized
- DML operations: INSERT, UPDATE, and DELETE
- DDL operation: ADD COLUMN
  Note: The CREATE TABLE operation is not supported. To synchronize data from a new table, you must add the table to the selected objects. For more information, see Add an object to a data synchronization task.
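The following hedged examples illustrate the operation types listed above. The table and column names are hypothetical placeholders and are not part of this topic.

```sql
-- DML operations that can be synchronized (table and column names are hypothetical).
INSERT INTO orders (order_id, amount) VALUES (1001, 99.90);
UPDATE orders SET amount = 109.90 WHERE order_id = 1001;
DELETE FROM orders WHERE order_id = 1001;

-- The only DDL operation that can be synchronized: ADD COLUMN.
ALTER TABLE orders ADD COLUMN remark VARCHAR(255);

-- Not synchronized: CREATE TABLE. A newly created table must be added to the
-- selected objects of the task before its data can be synchronized.
```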
Term mappings
| MySQL | AnalyticDB for PostgreSQL |
| --- | --- |
| Database | Schema |
| Table | Table |
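As a rough illustration of this mapping, assume a hypothetical MySQL database named sales that contains a table named orders: after synchronization, the database name becomes a schema name in the destination database, and the table keeps its name.

```sql
-- Source (MySQL): "sales" is a database that contains the table "orders".
USE sales;
SELECT COUNT(*) FROM orders;

-- Destination (AnalyticDB for PostgreSQL): "sales" is a schema in the
-- destination database, and the table keeps its name.
SET search_path TO sales;
SELECT COUNT(*) FROM orders;
```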
Procedure
Go to the Data Synchronization Tasks page of the new DTS console.
Note: You can also log on to the DMS console. In the top navigation bar, click DTS. Then, in the left-side navigation pane, choose .
In the upper-left corner of the page, select the region where the data synchronization instance resides.
Click Create Task. On the Create Data Synchronization Task page, configure the source and destination databases.
| Section | Parameter | Description |
| --- | --- | --- |
| N/A | Task Name | The name of the task. DTS automatically assigns a name to the task. We recommend that you specify a descriptive name that makes the task easy to identify. The task name does not need to be unique. |
| Source Database | Select Existing Connection | Select an existing ApsaraDB RDS for MySQL instance. This parameter is optional. |
| Source Database | Database Type | Select MySQL. |
| Source Database | Access Method | Select Alibaba Cloud Instance. |
| Source Database | Instance Region | The region where the source ApsaraDB RDS for MySQL instance resides. |
| Source Database | Replicate Data Across Alibaba Cloud Accounts | Select No in this example. |
| Source Database | RDS Instance ID | The ID of the source ApsaraDB RDS for MySQL instance. |
| Source Database | Database Account | The database account of the source ApsaraDB RDS for MySQL instance. The account must have the REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW, and SELECT permissions. A sketch of the corresponding GRANT statements is provided after this table. |
| Source Database | Database Password | The password of the database account. |
| Source Database | Encryption | Select Non-encrypted or SSL-encrypted based on your requirements. If you select SSL-encrypted, you must enable SSL encryption for the ApsaraDB RDS for MySQL instance before you configure the data synchronization task. For more information, see Configure the SSL encryption feature. |
| Destination Database | Select Existing Connection | Select an existing AnalyticDB for PostgreSQL instance in Serverless mode. This parameter is optional. |
| Destination Database | Database Type | Select AnalyticDB for PostgreSQL. |
| Destination Database | Access Method | Select Alibaba Cloud Instance. |
| Destination Database | Instance Region | The region where the destination AnalyticDB for PostgreSQL instance in Serverless mode resides. |
| Destination Database | Instance ID | The ID of the destination AnalyticDB for PostgreSQL instance in Serverless mode. |
| Destination Database | Database Name | The name of the destination database in the AnalyticDB for PostgreSQL instance in Serverless mode. |
| Destination Database | Database Account | The initial account of the destination AnalyticDB for PostgreSQL instance in Serverless mode. Note: You can also enter an account that has the RDS_SUPERUSER permission. For more information, see Manage users and permissions. |
| Destination Database | Database Password | The password of the database account. |
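If you need to create and grant the source database account yourself, the following is a minimal sketch of the required permissions. The account name, host, and password are hypothetical placeholders, not values from this topic.

```sql
-- Run on the source ApsaraDB RDS for MySQL instance (all names are placeholders).
CREATE USER 'dts_sync'@'%' IDENTIFIED BY 'your_password';

-- Replication-related privileges required for incremental data synchronization.
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'dts_sync'@'%';

-- Read access to the objects that you want to synchronize.
GRANT SELECT, SHOW VIEW ON *.* TO 'dts_sync'@'%';
```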
In the lower part of the page, click Test Connectivity and Proceed.
Note: DTS adds the CIDR blocks of DTS servers to the whitelist of the ApsaraDB RDS for MySQL instance. For more information, see the "CIDR blocks of DTS servers" section of the Add the CIDR blocks of DTS servers topic.
After your DTS task is completed or released, we recommend that you manually remove the added CIDR blocks from the whitelist.
Select objects for the task and configure advanced settings.
- Task Stages: Incremental Data Synchronization is automatically selected. You must also select Schema Synchronization and Full Data Synchronization. After the precheck is complete, DTS synchronizes the historical data of the selected objects from the source instance to the destination instance. The historical data is the basis for subsequent incremental synchronization.
- Processing Mode of Conflicting Tables:
  - Precheck and Report Errors: checks whether the destination database contains tables that have the same names as tables in the source database. If the source and destination databases do not contain tables with identical names, the precheck is passed. Otherwise, an error is returned during the precheck, and the data synchronization task cannot be started.
    Note: If the source and destination databases contain tables with identical names and the tables in the destination database cannot be deleted or renamed, you can use the object name mapping feature to rename the tables that are synchronized to the destination database. For more information, see Map object names.
  - Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.
    Warning: If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to potential risks.
    - If the source and destination databases have the same schema, and a data record has the same primary key as an existing data record in the destination database, the following scenarios may occur:
      - During full data synchronization, DTS does not synchronize the data record to the destination database. The existing data record in the destination database is retained.
      - During incremental data synchronization, DTS synchronizes the data record to the destination database. The existing data record in the destination database is overwritten.
    - If the source and destination databases have different schemas, data may fail to be initialized. In this case, only specific columns are synchronized, or the data synchronization task fails.
- DDL and DML Operations to Be Synchronized: The DDL and DML operations that you want to synchronize. For more information, see the "SQL operations that can be synchronized" section of this topic.
  Note: To select the SQL operations performed on a specific database or table, right-click the object in the Selected Objects section. In the dialog box that appears, select the SQL operations that you want to synchronize.
- Select objects: Select one or more objects from the Source Objects section and click the icon to add the objects to the Selected Objects section.
  Note: You can select only tables as the objects to be synchronized.
- Rename databases and tables:
  - To rename an object that you want to synchronize to the destination instance, right-click the object in the Selected Objects section. For more information, see the "Map the name of a single object" section of the Map object names topic.
  - To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see the "Map multiple object names at a time" section of the Map object names topic.
- Filter data: You can specify WHERE conditions to filter data, as shown in the example after this list. For more information, see Set filter conditions.
- Select the SQL operations to be synchronized: In the Selected Objects section, right-click an object. In the dialog box that appears, select the DML and DDL operations that you want to synchronize. For more information, see the "SQL operations that can be synchronized" section of this topic.
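For reference, a filter condition is a SQL expression that is evaluated against rows of the source table. The following sketch uses hypothetical column names; for the exact input format, see Set filter conditions.

```sql
-- Example filter condition: synchronize only completed orders created in 2023 or later.
order_status = 'completed' AND gmt_create >= '2023-01-01'
```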
Click Next: Advanced Settings.
- Monitoring and Alerting: Specify whether to configure monitoring and alerting for the data synchronization task. If the task fails or the synchronization latency exceeds the alert threshold, the alert contacts receive notifications.
  - Select No if you do not want to configure monitoring and alerting.
  - Select Yes to configure monitoring and alerting. In this case, you must also set the alert threshold.
- Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database: If you use DMS to perform online DDL operations on the source database, you can specify whether to synchronize the data of temporary tables generated by the online DDL operations.
  - Yes: DTS synchronizes the data of temporary tables generated by online DDL operations.
    Note: If the online DDL operations generate a large amount of data, the data synchronization task may be delayed.
  - No: DTS does not synchronize the data of temporary tables generated by online DDL operations. Only the original DDL data of the source database is synchronized.
    Note: If you select No, the tables in the destination database may be locked.
- Retry Time for Failed Connections: Specify the retry time range for failed connections. If the data synchronization task is disconnected, DTS immediately retries a connection within the specified time range. Valid values: 10 to 1440. Unit: minutes. Default value: 120. We recommend that you set the retry time range to more than 30 minutes. If DTS reconnects to the source and destination databases within the specified time range, DTS resumes the data synchronization task. Otherwise, the data synchronization task fails.
  Note: If you specify different retry time ranges for multiple data synchronization tasks that share the same source or destination database, the shortest retry time range takes precedence. If DTS retries a connection, you are charged for the data synchronization task. We recommend that you specify the retry time range based on your business requirements and release the data synchronization task at the earliest opportunity after the source and destination instances are released.
- Enclose Object Names in Quotation Marks: Specify whether to enclose object names in quotation marks. If you select Yes and any of the following conditions are met, DTS encloses object names in single quotation marks (') or double quotation marks (") during schema synchronization and incremental data synchronization:
  - The business environment of the source database is case-sensitive, and the object names of the database contain both uppercase and lowercase letters.
  - A source table name does not start with a letter and contains characters other than letters, digits, and specific special characters.
    Note: A source table name can contain only the following special characters: underscores (_), number signs (#), and dollar signs ($).
  - The names of the schemas, tables, or columns that you want to synchronize are keywords or reserved keywords of the destination database, or contain invalid characters.
  Note: If you select Yes, you must enclose object names in quotation marks to query them after DTS synchronizes data to the destination database, as shown in the example after this list.
- Configure ETL: Specify whether to configure the extract, transform, and load (ETL) feature. For more information, see What is ETL? Valid values:
  - Yes: configures the ETL feature. You can enter data processing statements in the code editor. For more information, see Configure ETL in a data migration or synchronization task.
  - No: does not configure the ETL feature.
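If object names are enclosed in quotation marks, queries against the destination database must quote those names as well. The following is a minimal sketch that uses a hypothetical mixed-case table name.

```sql
-- In AnalyticDB for PostgreSQL, unquoted identifiers are folded to lowercase.
SELECT * FROM "SalesOrders";   -- matches the quoted, case-sensitive table name
SELECT * FROM SalesOrders;     -- resolved as salesorders; the table may not be found
```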
In the lower part of the page, click Next: Configure Database and Table Fields. On the page that appears, set the primary key columns and distribution columns of the tables that you want to synchronize to the destination AnalyticDB for PostgreSQL instance.
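For context, the primary key columns and distribution columns that you configure in this step roughly correspond to the PRIMARY KEY and DISTRIBUTED BY clauses of the destination table. The following is a hedged sketch of an equivalent table definition in AnalyticDB for PostgreSQL; the schema, table, and column names are hypothetical.

```sql
-- Hypothetical destination table in AnalyticDB for PostgreSQL (Serverless mode).
CREATE TABLE sales.orders (
    order_id   BIGINT NOT NULL,
    buyer_id   BIGINT,
    amount     NUMERIC(12, 2),
    gmt_create TIMESTAMP,
    PRIMARY KEY (order_id)
)
DISTRIBUTED BY (order_id);  -- distribution column: rows are hashed across segments
```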
Save the task settings and run a precheck.
To view the parameters to be specified when you call the relevant API operation to configure the DTS task, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
If you do not need to view or have viewed the parameters, click Next: Save Task Settings and Precheck in the lower part of the page.
Note: Before you can start the data synchronization task, DTS performs a precheck. You can start the data synchronization task only after the task passes the precheck.
If the task fails to pass the precheck, click View Details next to each failed item. After you analyze the causes based on the check results, troubleshoot the issues. Then, run a precheck again.
If an alert is triggered for an item during the precheck:
If an alert item cannot be ignored, click View Details next to the failed item and troubleshoot the issue. Then, run a precheck again.
If an alert item can be ignored, click Confirm Alert Details. In the View Details dialog box, click Ignore. In the message that appears, click OK. Then, click Precheck Again to run a precheck again. If you ignore the alert item, data inconsistency may occur and your business may be exposed to potential risks.
Wait until the success rate becomes 100%. Then, click Next: Purchase Instance.
On the Purchase Instance page, configure the Billing Method and Instance Class parameters for the data synchronization instance. The following table describes the parameters.
| Section | Parameter | Description |
| --- | --- | --- |
| New Instance Class | Billing Method | Subscription: You pay for the subscription when you create the instance. This billing method is more cost-effective than pay-as-you-go for long-term use. Pay-as-you-go: The instance is billed on an hourly basis. This billing method is suitable for short-term use. If you no longer require a pay-as-you-go instance, you can release the instance to reduce costs. |
| New Instance Class | Resource Group | The resource group to which the data synchronization instance belongs. Default value: default resource group. For more information, see What is Resource Management? |
| New Instance Class | Instance Class | DTS provides synchronization instances of various specifications, and the synchronization speed varies based on the specifications. You can select an instance class based on your business requirements. For more information, see Specifications of data synchronization instances. |
| New Instance Class | Duration | If you select the subscription billing method, specify the subscription duration and the number of instances that you want to create. The subscription duration can be one to nine months, or one, two, three, or five years. Note: This parameter is displayed only if you select the subscription billing method. |
Read and select the Data Transmission Service (Pay-as-you-go) Service Terms.
Click Buy and Start to start the data synchronization task. You can view the progress of the task in the task list.
FAQ
If an error is repeatedly reported during schema synchronization even though you have confirmed that the table schemas are consistent, Submit a ticket.
VACUUM operations are not automatically performed during data synchronization because they may affect subsequent data write speeds. We recommend that you periodically perform VACUUM operations on databases.
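A minimal sketch of such a periodic maintenance command, assuming a hypothetical table named sales.orders:

```sql
-- Reclaim space occupied by updated and deleted rows.
VACUUM sales.orders;

-- During a maintenance window, VACUUM FULL reclaims space more thoroughly
-- but takes an exclusive lock on the table:
-- VACUUM FULL sales.orders;
```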
If an exception occurs during full data synchronization, you must clear the data in the destination table and then write the data again.
AnalyticDB for PostgreSQL instances in Serverless mode deliver good synchronization performance when large volumes of data are written from a single table, but poor performance when small volumes of data are written from hot data rows or from many tables. If your business involves the latter scenarios, we recommend that you Submit a ticket to request parameter optimization and performance improvement.