This topic describes how to use the data transmission service to migrate data from a PolarDB-X 1.0 database to a MySQL tenant of OceanBase Database.
A data migration project that remains in an inactive state for a long time may fail to resume, depending on how long incremental logs are retained. Inactive states include Failed, Paused, and Completed. The data transmission service automatically releases data migration projects that remain in an inactive state for more than 7 days to recycle resources. We recommend that you configure alerting for projects and handle project exceptions in a timely manner.
Background
PolarDB-X 1.0 is a cloud-native distributed database developed in-house by Alibaba Group. It integrates a distributed SQL engine with the in-house distributed storage engine X-DB and is designed based on a cloud-native integrated architecture. PolarDB-X 1.0 supports over ten million concurrent requests and provides massive storage for hundreds of petabytes of data. For more information, see Product overview.
After a project that migrates data from a PolarDB-X 1.0 database to a MySQL tenant of OceanBase Database is successfully started, the project itself is automatically deleted. The data transmission service then automatically creates projects that migrate data from the MySQL databases mounted to the PolarDB-X 1.0 database to the MySQL tenant of OceanBase Database. The number of these projects depends on the number of underlying MySQL instances in the PolarDB-X 1.0 database.
We recommend that you filter the projects by tag or project name for batch start, batch pause, batch start forward switchover, and more operations. For more information about batch operations, see Perform batch operations on data migration projects.
Prerequisites
The data transmission service has the privilege to access cloud resources. For more information, see Grant privileges to roles for data transmission.
You have created dedicated database users for data migration in the source PolarDB-X 1.0 database and the destination MySQL tenant of OceanBase Database and granted the corresponding privileges to the users.
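For reference, the following is a minimal sketch of how such accounts might be created with SQL. The account names, passwords, privilege list, and the destination database name tgt_db are placeholders and assumptions, not the required configuration; adjust them to your own security policy and to the privilege requirements of the migration types you select. Accounts for ApsaraDB RDS for MySQL instances can also be created in the RDS console.

-- On each ApsaraDB RDS for MySQL instance mounted to the source PolarDB-X 1.0 database.
-- This sketch assumes the source account needs read access plus binlog replication
-- privileges for incremental synchronization.
CREATE USER 'dts_migrator'@'%' IDENTIFIED BY '********';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dts_migrator'@'%';

-- In the destination MySQL tenant of OceanBase Database (tgt_db is a placeholder).
CREATE USER dts_writer IDENTIFIED BY '********';
GRANT ALL PRIVILEGES ON tgt_db.* TO dts_writer;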
Limitations
Limitations on the source database
Do not perform DDL operations for database or schema changes during full data migration. Otherwise, the data migration project may be interrupted.
The data transmission service supports PolarDB-X 1.0 databases of versions 5.2.8, 5.4.2, 5.4.9, and 5.4.12.
The data transmission service supports MySQL databases of versions 5.5, 5.6, 5.7, and 8.0 that are compatible with the ApsaraDB RDS for MySQL instances mounted to PolarDB-X 1.0 databases, as well as standard ApsaraDB RDS for MySQL and PolarDB for MySQL instances.
The data transmission service supports the migration of an object only when the following conditions are met: the database name, table name, and column name of the object are ASCII-encoded without special characters. The special characters are line breaks, spaces, and the following characters: . | " ' ` ( ) = ; / & \.
When you migrate data from a PolarDB-X 1.0 database to a MySQL tenant of OceanBase Database, the data transmission service does not support the following cases:
Schema migration or reverse incremental migration
Migration across Alibaba Cloud accounts
View migration
Migration in which the usernames or passwords of the ApsaraDB RDS for MySQL instances mounted to the source PolarDB-X 1.0 database are inconsistent
OceanBase Database supports the UTF8MB4, GBK, GB18030, binary, and UTF-16 character sets.
Considerations
For the migration of tables without unique keys (tables that have neither a primary key nor a NOT NULL unique key), when you restart or resume full migration, the data transmission service automatically truncates the destination tables that were synchronized before the restart or resumption. However, for the migration of such tables in a data migration project from an ApsaraDB RDS for MySQL database mounted to the PolarDB-X 1.0 database to a MySQL tenant of OceanBase Database, the data transmission service does not automatically truncate the destination tables when you restart or resume full migration.
If you do not specify mappings for objects of the PolarDB-X 1.0 database, all data of physical tables is synchronized to physical tables with the same names at the destination. The number of physical tables at the source is the same as that at the destination.
A difference between the source and destination table schemas may result in data inconsistency. Some known scenarios are described as follows:
When you manually create a table schema in the destination, if the data type of any column is not supported by the data transmission service, implicit data type conversion may occur in the destination, which causes inconsistent column types between the source and destination databases.
If the length of a column at the destination is shorter than that at the source database, the data of this column may be automatically truncated, which causes data inconsistency between the source and destination databases.
If you have selected only Incremental Synchronization when you created the data migration project, the data transmission service requires that the local incremental logs of the source database be retained for more than 48 hours.
If you have selected Full Migration and Incremental Synchronization when you created the data migration project, the data transmission service requires that the local incremental logs of the source database be retained for at least 7 days. Otherwise, if the data transmission service cannot obtain incremental logs, the data migration project may fail, or the data may even be inconsistent between the source and destination databases after migration.
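To verify that binary logging is enabled and to check how long local incremental logs are kept, you can query the following system variables on each ApsaraDB RDS for MySQL instance mounted to the source PolarDB-X 1.0 database. Which retention variable applies depends on the MySQL version, and on ApsaraDB RDS the retention policy is typically adjusted in the RDS console rather than with SET GLOBAL.

SHOW VARIABLES LIKE 'log_bin';                     -- binary logging must be ON
SHOW VARIABLES LIKE 'expire_logs_days';            -- retention in days (MySQL 5.6/5.7)
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';  -- retention in seconds (MySQL 8.0)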
Supported source and destination instance types
In the following table, OB_MySQL stands for the MySQL tenant of OceanBase Database.
Source | Destination
PolarDB-X 1.0 (Alibaba Cloud PolarDB-X 1.0 instance) | OB_MySQL (OceanBase cluster instance)
Procedure
Log on to the ApsaraDB for OceanBase console and purchase a data migration project.
For more information, see Purchase a data migration project.
Choose Data Transmission > Data Migration. On the page that appears, click Configuration for the data migration project.
If you want to reference the configurations of an existing project, click Reference Configuration. For more information, see Reference and clear the configuration of a data migration project.
On the Select Source and Destination page, configure the parameters.
Parameter
Description
Migration Project Name
We recommend that you set it to a combination of digits and letters. It must not contain any spaces and cannot exceed 64 characters in length.
Tag
Click the field and select a target tag from the drop-down list. You can also click Manage Tags to create, modify, and delete tags. For more information, see Use tags to manage data migration projects.
Note: After the project that migrates data from a PolarDB-X 1.0 database to a MySQL tenant of OceanBase Database is successfully started, the project will be automatically deleted. You need to add a proper tag to the project.
Source
If you have created a PolarDB-X 1.0 data source, select it from the drop-down list. If not, click New Data Source in the drop-down list to create one in the dialog box on the right side. For more information about the parameters, see Create a PolarDB-X 1.0 data source.
Destination
If you have created a data source for the MySQL tenant of OceanBase Database, select it from the drop-down list. If not, click New Data Source in the drop-down list to create one in the dialog box on the right side. For more information about the parameters, see Create an OceanBase data source.
Important: The destination data source can only be an OceanBase cluster instance.
Click Next. In the dialog box that appears, click OK.
Note that this project supports only tables and views with a primary key or a non-null unique index. Other tables and views are automatically filtered out.
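If you want to estimate in advance which source tables will be filtered out, a rough check is to list tables that have neither a primary key nor a unique constraint. The following query is a sketch that you can run against each ApsaraDB RDS for MySQL instance mounted to the PolarDB-X 1.0 database; it does not check whether the columns of a unique index are defined as NOT NULL, so review the results manually.

SELECT t.table_schema, t.table_name
FROM information_schema.tables t
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
  AND NOT EXISTS (
    SELECT 1
    FROM information_schema.table_constraints c
    WHERE c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
  );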
On the Select Migration Type page, specify migration types for the current data migration project.
Supported migration types are full migration, incremental synchronization, and full verification.
Migration type
Description
Full migration
After a full migration task is started, the data transmission service migrates existing data of tables in the source database to corresponding tables in the destination database.
Before data migration, assess the performance of the source and destination databases. We recommend that you perform data migration in off-peak hours. During full migration, the data transmission service consumes some read and write resources in the source and destination databases. This may increase the loads of the databases. For more information, see Performance assessment of migration assessment.
Incremental synchronization
After an incremental synchronization task is started, the data transmission service synchronizes changed data (data that is added, modified, or removed) from the source database to corresponding tables in the destination database.
Incremental synchronization supports the following DML operations: Insert, Delete, and Update. You can select statements based on your business needs. For more information, see Configure DDL/DML synchronization.
Full verification
After the full migration and incremental synchronization tasks are completed, the data transmission service automatically initiates a full verification task to verify the tables in the source and destination databases.
If you have selected Incremental Synchronization but did not select all DML statements in the DML Synchronization section, the data transmission service does not support full verification.
Before data migration, assess the performance of the source and destination databases. We recommend that you perform data migration in off-peak hours. During full verification, the data transmission service consumes some read resources in the source and destination databases. This may increase the loads of the databases.
Click Next. On the Select Migration Objects page, specify the migration objects for the migration project.
At present, you can select migration objects only by using the Specify Objects option. Select the objects to be migrated on the left, and click > to add them to the list on the right. You can select tables of one or more databases as the migration objects.
The data transmission service allows you to import objects from text files, rename destination objects, set row filters, view column information, and remove a single migration object or all migration objects.
Operation
Description
Import objects
In the list on the right, click Import Objects in the upper-right corner.
In the dialog box that appears, click OK.
Important: This operation will overwrite previous selections. Proceed with caution.
In the Import Objects dialog box, import the objects to be migrated.
You can import CSV files to rename databases or tables and set row filtering conditions. For more information, see Download and import the settings of migration objects.
Click Validate.
After you import the migration objects, check their validity. Column field mapping is not supported at present.
After the validation succeeds, click OK.
Rename an object
The data transmission service allows you to rename migration objects. For more information, see Rename a database table.
Configure settings
The data transmission service allows you to filter rows by using WHERE conditions. For more information, see Use SQL conditions to filter data; a brief sketch of such a condition follows this table. You can also view column information of the migration objects in the View Columns section.
Remove one or all objects
The data transmission service allows you to remove a single object or all migration objects that are added to the right-side list during data mapping.
Remove a single migration object
In the list on the right, move the pointer over the object that you want to remove, and click Remove to remove the migration object.
Remove all migration objects
In the list on the right, click Remove All in the upper-right corner. In the dialog box that appears, click OK to remove all migration objects.
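As a rough illustration of a row filter (the column name gmt_create and the date range are hypothetical, and the exact syntax accepted for filter conditions is described in Use SQL conditions to filter data), a condition like the following would migrate only the rows created in 2024:

-- Hypothetical filter condition on a hypothetical column gmt_create:
gmt_create >= '2024-01-01 00:00:00' AND gmt_create < '2025-01-01 00:00:00'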
Click Next. On the Migration Options page, configure the parameters.
Full migration
The following table describes the parameters for full migration, which are displayed only if you have selected Full Data Migration on the Select Migration Type page.
Parameter
Description
Read Concurrency Configuration
The concurrency for reading data from the source during full migration. The maximum value is 512. A high read concurrency may incur excessive stress on the source, affecting the business.
Write Concurrency Configuration
The concurrency for writing data to the destination during full migration. The maximum value is 512. A high write concurrency may incur excessive stress on the destination, affecting the business.
Full Data Migration Rate Limit
You can choose whether to limit the full migration rate as needed. If you choose to limit the full migration rate, you must specify the records per second (RPS) and bytes per second (BPS). The RPS specifies the maximum number of data rows migrated to the destination per second during full migration, and the BPS specifies the maximum amount of data in bytes migrated to the destination per second during full migration.
Note: The RPS and BPS values specified here are only for throttling. The actual full migration performance is subject to factors such as the settings of the source and destination and the instance specifications.
Processing Strategy When Destination Table Has Records
Valid values: Ignore and Stop Migration.
If you select Ignore, when the data to be inserted conflicts with existing data of a destination table, the data transmission service logs the conflicting data while retaining the existing data.
Important: If you select Ignore, data is pulled in IN mode during full verification. In this case, verification is inapplicable if the destination contains data that does not exist in the source, and the verification performance is downgraded.
If you select Stop Migration and a destination table contains records, an error indicating that the migration is not supported is reported during full migration. In this case, you must process the data in the destination table and then continue with the migration.
Important: If you click Resume in the dialog box prompting the error, the data transmission service ignores this error and continues to migrate data. Proceed with caution.
Incremental synchronization
The following table describes the parameters for incremental synchronization, which are displayed only if you have selected Incremental Synchronization on the Select Migration Type page.
Parameter
Description
Write Concurrency Configuration
The concurrency for writing data to the destination during incremental synchronization. The maximum value is 512. A high write concurrency may incur excessive stress on the destination, affecting the business.
Incremental Synchronization Rate Limit
You can choose whether to limit the incremental synchronization rate as needed. If you choose to limit the incremental synchronization rate, you must specify the RPS and BPS. The RPS specifies the maximum number of data rows synchronized to the destination per second during incremental synchronization, and the BPS specifies the maximum amount of data in bytes synchronized to the destination per second during incremental synchronization.
Note: The RPS and BPS values specified here are only for throttling. The actual incremental synchronization performance is subject to factors such as the settings of the source and destination and the instance specifications.
Incremental Synchronization Start Timestamp
If you have set the migration type to Full Data Migration, this parameter is not displayed.
If you have selected Incremental Synchronization but not Full Data Migration, specify a point in time after which the data is to be synchronized. The default value is the current system time. For more information, see Set an incremental synchronization timestamp.
Click Precheck. Then, the system performs a precheck on the data migration project.
During the precheck, the data transmission service checks the read and write privileges of the database users and the network connections of the databases. The data migration project can be started only after it passes all check items. If an error is returned during the precheck, you can perform the following operations:
Identify and troubleshoot the problem and then perform the precheck again.
Click Skip in the Actions column of the failed precheck item. In the dialog box that prompts the consequences of the operation, click OK.
After the precheck succeeds, click Start Project.
If you do not need to start the project now, click Save. After that, you can only manually start the project or start it in a batch operation on the Migration Projects page. For more information about batch operations, see Perform batch operations on data migration projects.
After the project is started, the project for data migration from the PolarDB-X 1.0 database to the MySQL tenant of OceanBase Database is automatically deleted. The data transmission service retains the projects for data migration from the databases mounted to the PolarDB-X 1.0 database to the MySQL tenant of OceanBase Database and automatically creates the corresponding data sources. In the dialog box that appears, you can click Download as file to save the related information as a CSV file.
Then, click OK. On the Migration Projects page, you can start one or more projects for data migration from the MySQL database to the MySQL tenant of OceanBase Database.
The data transmission service allows you to modify the migration objects when a migration project is running. For more information, see View and modify migration objects. After the data migration project is started, it will be executed based on the selected migration types. For more information, see View migration details.