
Data Transmission Service:Migrate data from a PolarDB-X 1.0 instance to an Elasticsearch cluster

Last Updated:Nov 21, 2024

This topic describes how to migrate data from a PolarDB-X 1.0 instance to an Elasticsearch cluster by using Data Transmission Service (DTS).

Prerequisites

An Elasticsearch cluster is created as the destination, and the available storage space of the cluster is larger than the total size of the data in the source PolarDB-X 1.0 instance.

Limits

Limits on the source database
  • The tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and the values in those columns must be unique. Otherwise, the destination database may contain duplicate data records.
  • If you select tables as the objects to be migrated and you need to modify the tables in the destination database, such as renaming tables or columns, you can migrate up to 1,000 tables in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to migrate the tables in batches or configure a task to migrate the entire database.
  • If you need to migrate incremental data, make sure that the following requirements are met (a verification sketch follows this list):
    • The binary logging feature is enabled, and the binlog_row_image parameter is set to full. Otherwise, error messages are returned during the precheck and the data migration task cannot be started.
    • If you perform only incremental data migration, the binary logs of the source database must be stored for more than 24 hours. If you perform both full data migration and incremental data migration, the binary logs must be stored for at least seven days. Otherwise, Data Transmission Service (DTS) may fail to obtain the binary logs and the task may fail. In extreme cases, data inconsistency or loss may occur. After full data migration is complete, you can reduce the retention period to more than 24 hours. If you do not set the retention period of binary logs based on these requirements, the Service Level Agreement (SLA) of DTS does not guarantee service reliability or performance.

  • You cannot migrate data from a read-only PolarDB-X 1.0 instance.
  • Limits on operations to be performed on the source database:
    • During data migration, do not upgrade or downgrade the source instance, migrate frequently updated tables, change shard keys, or perform DDL operations on source objects. Otherwise, the data migration task fails.
    • During full data migration and incremental data migration, DTS temporarily disables the constraint check and cascade operations on foreign keys at the session level. If you perform the cascade update and delete operations on the source database during data migration, data inconsistency may occur.
    • If you change the network type of the PolarDB-X 1.0 instance during data migration, you must also modify the network connection information of the data migration task.
    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency may occur between the source and destination databases. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration.
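
You can verify the binary log requirements before you configure the task. The following is a minimal sketch, not part of the DTS procedure; it assumes that the PolarDB-X 1.0 instance accepts MySQL-compatible connections, and the endpoint, account, and password are placeholders.

```python
# Minimal pre-flight check for the binary log requirements, assuming a
# MySQL-compatible endpoint. All connection details are placeholders.
import pymysql

conn = pymysql.connect(
    host="drds-xxxx.drds.aliyuncs.com",  # hypothetical instance endpoint
    user="dts_user",                     # placeholder migration account
    password="your_password",
    port=3306,
)
try:
    with conn.cursor() as cur:
        # Expect log_bin = ON and binlog_row_image = FULL; expire_logs_days
        # (or the engine's equivalent variable) reflects the retention period.
        for var in ("log_bin", "binlog_row_image", "expire_logs_days"):
            cur.execute("SHOW VARIABLES LIKE %s", (var,))
            print(cur.fetchone())
finally:
    conn.close()
```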
Other limits
  • If you want to add columns to a table in the source database, first modify the mappings of the index that corresponds to the table in the Elasticsearch cluster (see the mapping sketch after this list). Then, perform the DDL operations on the source table, pause the data migration task, and start the task again.
  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses the read and write resources of the source and destination databases. This may increase the loads of database servers.
  • DTS attempts to resume data migration tasks that failed within the last seven days. Before you switch workloads to the destination database, you must stop or release the failed tasks. You can also execute the REVOKE statement to revoke the write permissions from the accounts that are used by DTS to access the destination database. Otherwise, the data in the source database overwrites the data in the destination database after the failed task is resumed.
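
The add-column workflow described above can be sketched as follows, assuming an Elasticsearch 7.x destination. The endpoint, index name, and field name are placeholders; the index name corresponds to the source table under the Table Name mapping rule described later in this topic.

```python
# Sketch of the add-column workflow: extend the index mapping in the
# destination cluster first, then alter the source table. Assumes an
# Elasticsearch 7.x destination; all names are placeholders.
import requests

ES_ENDPOINT = "http://es-cn-xxxx.elasticsearch.aliyuncs.com:9200"
AUTH = ("elastic", "your_password")  # the default username is elastic

resp = requests.put(
    f"{ES_ENDPOINT}/my_table/_mapping",  # index that corresponds to the table
    json={"properties": {"new_column": {"type": "keyword"}}},
    auth=AUTH,
    timeout=10,
)
resp.raise_for_status()
# After the mapping is extended: run the ALTER TABLE statement on the source
# table, pause the data migration task, and then start it again.
```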
Usage notes

DTS periodically updates the `dts_health_check`.`ha_health_check` table in the source database to move the binary log file position forward.

Billing

  • Schema migration and full data migration: the instance configuration fee and the Internet traffic fee are free of charge.
  • Incremental data migration: the instance configuration fee is charged. For more information, see Billing overview.

Migration types

  • Schema migration

    DTS migrates the schemas of the selected objects from the source database to the destination database.

  • Full data migration

    DTS migrates the historical data of required objects from the source database to the destination database.

  • Incremental data migration

    After full data migration is complete, DTS migrates incremental data from the source database to the destination database. Incremental data migration allows data to be migrated smoothly without interrupting the services of self-managed applications during data migration.

SQL operations that can be incrementally migrated

  • DML: INSERT, UPDATE, and DELETE

Data type mappings

For more information, see Data type mappings for initial schema synchronization.

Permissions required for database accounts

  • PolarDB-X 1.0 instance: the SELECT permission for schema migration and full data migration, and read and write permissions on the objects to be migrated for incremental data migration.
    Note For more information about how to grant the permissions to the database account, see Manage accounts.
  • Elasticsearch cluster: read and write permissions on the destination database. The account is usually elastic.
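
Before you configure the task, you can confirm that the destination account and password are valid by calling the cluster root endpoint. This is a minimal sketch with a placeholder endpoint and password.

```python
# Minimal credential check against the destination Elasticsearch cluster;
# the endpoint and password are placeholders.
import requests

resp = requests.get(
    "http://es-cn-xxxx.elasticsearch.aliyuncs.com:9200/",
    auth=("elastic", "your_password"),
    timeout=10,
)
resp.raise_for_status()  # a 401 here means the account or password is wrong
print(resp.json()["version"]["number"])
```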

Procedure

  1. Go to the Data Migration Tasks page.

    1. Log on to the Data Management (DMS) console.

    2. In the top navigation bar, move the pointer over DTS.

    3. Choose DTS (DTS) > Data Migration.

  2. From the drop-down list on the right side of Data Migration Tasks, select the region in which your data migration instance resides.

    Note

    If you use the new DTS console, you must select the region in which the data migration instance resides in the upper-left corner.

  3. Click Create Task. In the Create Data Migration Task wizard, configure the source and destination databases. The following information describes the parameters.

    N/A
    • Task Name: The name of the task. DTS automatically generates a task name. We recommend that you specify an informative name to identify the task. You do not need to specify a unique task name.

    Source Database
    • Select a DMS database instance: The database that you want to use. You can choose whether to use an existing database based on your business requirements.
      • If you select an existing database, DTS automatically populates the parameters for the database.
      • If you do not select an existing database, you must configure the following database information.
    • Database Type: The type of the source database. Select PolarDB-X 1.0.
    • Access Method: The access method of the source database. Select Alibaba Cloud Instance.
    • Instance Region: The region in which the source PolarDB-X 1.0 instance resides.
    • Replicate Data Across Alibaba Cloud Accounts: Specifies whether to migrate data across Alibaba Cloud accounts. In this example, No is selected.
    • Instance ID: The ID of the source PolarDB-X 1.0 instance.
    • Database Account: The database account of the source PolarDB-X 1.0 instance. For information about the permissions that are required for the account, see Permissions required for database accounts.
    • Database Password: The password of the database account.

    Destination Database
    • Select a DMS database instance: The database that you want to use. You can choose whether to use an existing database based on your business requirements.
      • If you select an existing database, DTS automatically populates the parameters for the database.
      • If you do not select an existing database, you must configure the following database information.
    • Database Type: The type of the destination database. Select Elasticsearch.
    • Access Method: The access method of the destination database. Select Alibaba Cloud Instance.
    • Instance Region: The region in which the destination Elasticsearch cluster resides.
    • Instance ID: The ID of the destination Elasticsearch cluster.
    • Database Account: The username that is used to connect to the Elasticsearch cluster. This is the username that you specified when you created the Elasticsearch cluster. The default database account in Elasticsearch clusters is elastic.
    • Database Password: The password of the database account.

  4. In the lower part of the page, click Test Connectivity and Proceed.

    • If the source or destination database is an Alibaba Cloud database instance, such as an ApsaraDB RDS for MySQL or ApsaraDB for MongoDB instance, DTS automatically adds the CIDR blocks of DTS servers to the IP address whitelist of the instance.
    • If the source or destination database is a self-managed database hosted on an Elastic Compute Service (ECS) instance, DTS automatically adds the CIDR blocks of DTS servers to the security group rules of the ECS instance, and you must make sure that the ECS instance can access the database.
    • If the self-managed database is hosted on multiple ECS instances, you must manually add the CIDR blocks of DTS servers to the security group rules of each ECS instance.
    • If the source or destination database is a self-managed database that is deployed in a data center or provided by a third-party cloud service provider, you must manually add the CIDR blocks of DTS servers to the IP address whitelist of the database to allow DTS to access the database.

    For more information, see the CIDR blocks of DTS servers section of the Add the CIDR blocks of DTS servers topic.

    Warning

    If the public CIDR blocks of DTS servers are automatically or manually added to the whitelist of a database instance or to the security group rules of an ECS instance, security risks may arise. Therefore, before you use DTS to migrate data, you must understand and acknowledge the potential risks and take preventive measures, including but not limited to the following measures: enhancing the security of your username and password, limiting the ports that are exposed, authenticating API calls, regularly checking the whitelist or security group rules and forbidding unauthorized CIDR blocks, or connecting the database instance to DTS by using Express Connect, VPN Gateway, or Smart Access Gateway.

  5. Configure the objects to be migrated and the advanced settings.
    Migration Types

    • To perform only full data migration, select Schema Migration and Full Data Migration.

    • To ensure service continuity during data migration, select Schema Migration, Full Data Migration, and Incremental Data Migration.

    Note

    If you do not select Incremental Data Migration, we recommend that you do not write data to the source database during data migration. This ensures data consistency between the source and destination databases.

    Processing Mode of Conflicting Tables
    • Precheck and Report Errors: checks whether the destination database contains tables that use the same names as tables in the source database. If the source and destination databases do not contain tables that have identical table names, the precheck is passed. Otherwise, an error is returned during the precheck and the data migration task cannot be started.

      Note

      If the source and destination databases contain tables with identical names and the tables in the destination database cannot be deleted or renamed, you can use the object name mapping feature to rename the tables that are migrated to the destination database. For more information, see Map object names.

    • Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.

      Warning

      If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to the following potential risks:

      • If the source and destination databases have the same schema, and a data record has the same primary key as an existing data record in the destination database, the following scenarios may occur:

        • During full data migration, DTS does not migrate the data record to the destination database. The existing data record in the destination database is retained.

        • During incremental data migration, DTS migrates the data record to the destination database. The existing data record in the destination database is overwritten.

      • If the source and destination databases have different schemas, only specific columns are migrated or the data migration task fails. Proceed with caution.

    Index Name
    • If you select Table Name, the index created in the destination Elasticsearch cluster uses the same name as the table in the source instance.

    • If you select Database Name_Table Name, the index created in the destination Elasticsearch cluster is named in the format of Database name_Table name.

    Note The index name mapping rule takes effect for all tables.
    Capitalization of Object Names in Destination Instance

    The capitalization of database names, table names, and column names in the destination cluster. The default value is DTS default policy. You can also select another option to make the capitalization of object names consistent with that of the source or destination database. For more information, see Specify the capitalization of object names in the destination instance.

    Source Objects

    Select one or more objects from the Source Objects section. Click the Rightwards arrow icon and add the objects to the Selected Objects section.

    Note The time field supports the TIMESTAMP data type. If a value of the time field is 0 in the source database, the value of the time field is automatically converted to null in the destination database.
    Selected Objects
    • To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see Map the name of a single object.
    • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.
    Note
    • If you use the object name mapping feature to rename an object, other objects that are dependent on the object may fail to be migrated.
    • To specify WHERE conditions to filter data, right-click an object in the Selected Objects section. In the dialog box that appears, specify the conditions. For more information, see Specify filter conditions.
    • To select the SQL operations performed on a specific database or table, right-click an object in the Selected Objects section. In the dialog box that appears, select the SQL operations that you want to migrate. For more information about the SQL operations that can be migrated, see the SQL operations that can be incrementally migrated section of this topic.
  6. Click Next: Advanced Settings.
    Monitoring and Alerting

    Specifies whether to configure alerting for the data migration task. If the task fails or the migration latency exceeds the specified threshold, the alert contacts receive notifications.

    Retry Time for Failed Connections
    The retry time range for failed connections. If the source or destination database fails to be connected after the data migration task is started, DTS immediately retries a connection within the time range. Valid values: 10 to 1440. Unit: minutes. Default value: 720. We recommend that you set the parameter to a value greater than 30. If DTS reconnects to the source and destination databases within the specified time range, DTS resumes the data migration task. Otherwise, the data migration task fails.
    Note
    • If you set different retry time ranges for multiple data migration tasks that have the same source or destination database, the shortest retry time range that is set takes precedence.
    • When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time range based on your business requirements. You can also release the DTS instance at your earliest opportunity after the source and destination instances are released.
    Retry Time for Other Issues

    The retry time range for other issues. For example, if DDL or DML operations fail to be performed after the data migration task is started, DTS immediately retries the operations within the retry time range. Valid values: 1 to 1440. Unit: minutes. Default value: 10. We recommend that you set the parameter to a value greater than 10. If the failed operations are successfully performed within the specified retry time range, DTS resumes the data migration task. Otherwise, the data migration task fails.

    Important

    The value of the Retry Time for Other Issues parameter must be smaller than the value of the Retry Time for Failed Connections parameter.

    Shard Configuration
    The numbers of primary shards and replica shards for the indexes that are created in the destination Elasticsearch cluster. A sketch of the corresponding index settings follows this step's parameter list.
    String Index
    The method used to index string fields in the destination Elasticsearch cluster.
    • analyzed: The strings are analyzed before indexing. You must select a specific analyzer. For more information about the analyzer types, see Built-in analyzer reference.
    • not analyzed: The strings are indexed with their original values.
    • no: The strings are not indexed.
    Time Zone
    The time zone that is applied to date and time data types, such as DATETIME and TIMESTAMP, when data is migrated to the destination Elasticsearch cluster.
    Note If the date and time data types in the destination cluster do not need a time zone, you must specify the document type for the date and time data types.
    DOCID

    The value of this parameter defaults to the primary key of the table. If the table does not have a primary key, the value is the ID column that is automatically generated by Elasticsearch.

    Configure ETL

    Specifies whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL?
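
    The following sketch shows the index settings and mappings that the Shard Configuration, String Index, and Time Zone choices roughly correspond to, assuming an Elasticsearch 7.x destination. DTS creates the index for you; this is an illustration only, and all names are placeholders.

    ```python
    # Illustration only: DTS creates the index. This shows the settings and
    # mappings that the advanced-settings choices roughly correspond to on an
    # Elasticsearch 7.x cluster. All names are placeholders.
    import requests

    ES_ENDPOINT = "http://es-cn-xxxx.elasticsearch.aliyuncs.com:9200"
    AUTH = ("elastic", "your_password")

    body = {
        # Shard Configuration: numbers of primary shards and replica shards
        "settings": {"number_of_shards": 3, "number_of_replicas": 1},
        "mappings": {
            "properties": {
                # String Index "analyzed": a text field with a specific analyzer
                "title": {"type": "text", "analyzer": "standard"},
                # String Index "not analyzed": indexed with the original value
                "status": {"type": "keyword"},
                # String Index "no": the value is not indexed and not searchable
                "remark": {"type": "keyword", "index": False},
                # DATETIME/TIMESTAMP values land in date fields; the Time Zone
                # setting determines the zone offset that is written
                "gmt_created": {"type": "date"},
            }
        },
    }
    resp = requests.put(f"{ES_ENDPOINT}/my_table", json=body, auth=AUTH, timeout=10)
    resp.raise_for_status()
    ```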

  7. In the lower part of the page, click Next: Configure Database and Table Fields. On the page that appears, set the _routing policy and _id value for the tables that you want to migrate to the destination Elasticsearch cluster.
    Set _routing
    Specifies whether to store a document on a specific shard of the destination Elasticsearch cluster. For more information, see _routing. A write sketch follows this table.
    • If you select Yes, you can specify custom columns for routing.
    • If you select No, the _id value is used for routing.
    Note If the version of the destination Elasticsearch cluster is 7.x, you must select No.
    _routing Column
    The column that is used for routing. This parameter is required only if the Set _routing parameter is set to Yes.
    Value of _id
    The column that is used to store the IDs of documents.
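
    The following sketch illustrates how the _routing and _id settings affect writes to the destination cluster. It corresponds to setting Set _routing to Yes, which the note above restricts to clusters earlier than 7.x; all names are placeholders.

    ```python
    # Illustration of _routing and _id: a document written with an explicit
    # routing value is placed on the shard computed from that value rather
    # than from its _id. All names are placeholders.
    import requests

    ES_ENDPOINT = "http://es-cn-xxxx.elasticsearch.aliyuncs.com:9200"
    AUTH = ("elastic", "your_password")

    resp = requests.put(
        f"{ES_ENDPOINT}/my_table/_doc/1001",  # 1001: primary key value used as _id
        params={"routing": "cn-hangzhou"},    # value taken from the _routing column
        json={"order_id": 1001, "region": "cn-hangzhou"},
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    ```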
  8. In the lower part of the page, click Next: Save Task Settings and Precheck.

    You can move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters to view the parameters to be specified when you call the relevant API operation to configure the DTS task.

    Note
    • Before you can start the data migration task, DTS performs a precheck. You can start the data migration task only after the task passes the precheck.

    • If the task fails to pass the precheck, click View Details next to each failed item. After you analyze the causes based on the check results, troubleshoot the issues. Then, run a precheck again.

    • If an alert is triggered for an item during the precheck:

      • If an alert item cannot be ignored, click View Details next to the failed item and troubleshoot the issues. Then, run a precheck again.

      • If the alert item can be ignored, click Confirm Alert Details. In the View Details dialog box, click Ignore. In the message that appears, click OK. Then, click Precheck Again to run a precheck again. If you ignore the alert item, data inconsistency may occur, and your business may be exposed to potential risks.

  9. Wait until Success Rate becomes 100%. Then, click Next: Purchase Instance.

  10. On the Purchase Instance page, configure the Instance Class parameter for the data migration instance. The following information describes the parameters.

    New Instance Class
    • Resource Group: The resource group to which the data migration instance belongs. Default value: default resource group. For more information, see What is Resource Management?
    • Instance Class: DTS provides instance classes that vary in migration speed. You can select an instance class based on your business scenario. For more information, see Instance classes of data migration instances.

  11. Read and agree to Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.

  12. Click Buy and Start. In the message that appears, click OK.

    You can view the progress of the task on the Data Migration page.