Data Transmission Service: Migrate data from a PolarDB-X 2.0 instance to an Elasticsearch cluster

Last updated: Jul 24, 2023

This topic describes how to migrate data from a PolarDB-X 2.0 instance to an Elasticsearch cluster by using Data Transmission Service (DTS).

Prerequisites

  • A source PolarDB-X instance that is compatible with MySQL 5.7 is created.
  • The destination Elasticsearch cluster is created. For more information, see Create an Alibaba Cloud Elasticsearch cluster.
  • The engine versions of the source instance and the destination cluster are supported. For more information, see Overview of data migration scenarios.
  • The available storage space of the destination Elasticsearch cluster is larger than the total size of the data in the source PolarDB-X instance.
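
You can spot-check the last two prerequisites before you configure the task. The following is a minimal sketch in Python with pymysql and requests; the endpoints, accounts, and passwords are placeholders, not values from this topic:

```python
import pymysql
import requests

SRC = dict(host="pxc-xxxx.polarx.rds.aliyuncs.com", port=3306,
           user="dts_user", password="***")                # placeholders
ES = "http://es-cn-xxxx.elasticsearch.aliyuncs.com:9200"   # placeholder
ES_AUTH = ("elastic", "***")

conn = pymysql.connect(**SRC)
with conn.cursor() as cur:
    # Total size (data + indexes) of all user databases, in bytes.
    cur.execute(
        "SELECT COALESCE(SUM(data_length + index_length), 0) "
        "FROM information_schema.tables "
        "WHERE table_schema NOT IN "
        "('mysql', 'information_schema', 'performance_schema', 'sys')"
    )
    src_bytes = int(cur.fetchone()[0])

# Free disk space per data node, reported by the _cat/allocation API.
rows = requests.get(f"{ES}/_cat/allocation?format=json&bytes=b",
                    auth=ES_AUTH, timeout=10).json()
free_bytes = sum(int(r["disk.avail"]) for r in rows if r.get("disk.avail"))

print(f"source data: {src_bytes} B, destination free: {free_bytes} B")
```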

Limits

Limits on the source database
  • Bandwidth requirements: The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.
  • The tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.
  • If you select tables as the objects to be migrated and you need to modify the tables in the destination database, such as renaming tables or columns, you can migrate up to 1,000 tables in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to migrate the tables in batches or configure a task to migrate the entire database.
  • If you need to migrate incremental data, make sure that the following requirements are met:
    • The binary logging feature is enabled, and the binlog_row_image parameter is set to full. Otherwise, error messages are returned during the precheck and the data migration task cannot be started.
    • If you perform only incremental data migration, the binary logs of the source database must be stored for more than 24 hours. If you perform both full data migration and incremental data migration, the binary logs must be stored for at least seven days; after full data migration is complete, you can reduce the retention period to more than 24 hours. Otherwise, DTS may fail to obtain the binary logs and the task may fail or, in exceptional circumstances, data inconsistency or loss may occur. If you do not meet these retention requirements, the Service Level Agreement (SLA) of DTS does not guarantee service reliability or performance. A quick way to verify these settings is shown in the sketch after this list.

  • Limits on operations to be performed on the source database:
    • During schema migration and full data migration, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data migration task fails.
    • If you want to change the network type of the PolarDB-X instance during data migration, you must also modify the network connection settings of the data migration task.
    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency may occur between the source and destination databases. To ensure data consistency, we recommend that you select Schema Migration, Full Data Migration, and Incremental Data Migration as the migration types.
  • The PolarDB-X instance must be compatible with MySQL 5.7.
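
A minimal sketch for verifying the binary log settings described above, assuming a MySQL-compatible connection to the source instance (connection details are placeholders):

```python
import pymysql

# Placeholders: replace with your instance endpoint and migration account.
conn = pymysql.connect(host="pxc-xxxx.polarx.rds.aliyuncs.com", port=3306,
                       user="dts_user", password="***")
with conn.cursor() as cur:
    for var in ("log_bin", "binlog_row_image"):
        cur.execute("SHOW GLOBAL VARIABLES LIKE %s", (var,))
        print(cur.fetchone())
# Expected output if the requirements are met:
# ('log_bin', 'ON')
# ('binlog_row_image', 'FULL')
```
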
Other limits
  • If you want to add columns to a table in the source database, modify the mappings of the index that corresponds to the table in the Elasticsearch cluster. Then, perform DDL operations on the source table, pause the data migration task, and then start the task again.
  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses the read and write resources of the source and destination databases. This may increase the loads on the database servers.
  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination database. After full data migration is complete, the size of used tablespace of the destination database is larger than that of the source database.
  • DTS attempts to resume data migration tasks that failed within the last seven days. Before you switch workloads to the destination database, you must stop or release the failed tasks. You can also execute the REVOKE statement to revoke the write permissions from the accounts that are used by DTS to access the destination database. Otherwise, the data in the source database overwrites the data in the destination database after a failed task is resumed.
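
Because the destination here is Elasticsearch rather than a SQL database, the REVOKE statement mentioned above has no direct equivalent; the closest analog is to reduce the roles of the account that DTS uses to write to the cluster. A minimal sketch using the Elasticsearch security API, assuming X-Pack security is enabled (the user name, role, and endpoint are placeholders):

```python
import requests

ES = "http://es-cn-xxxx.elasticsearch.aliyuncs.com:9200"   # placeholder
ADMIN = ("elastic", "***")

# Overwrite the hypothetical DTS account with a read-only role so that a
# resumed task can no longer write to the destination cluster.
resp = requests.post(f"{ES}/_security/user/dts_writer",
                     json={"password": "***", "roles": ["viewer"]},
                     auth=ADMIN, timeout=10)
print(resp.status_code, resp.json())
```
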
Precautions
  • DTS updates the `dts_health_check`.`ha_health_check` table in the source database as scheduled to move forward the binary log file position.

Billing

Migration type | Instance configuration fee | Internet traffic fee
Schema migration and full data migration | Free of charge. | Charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Billing overview.
Incremental data migration | Charged. For more information, see Billing overview. | Charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Billing overview.

Migration types

  • Schema migration

    DTS migrates the schemas of objects from the source database to the destination database.

  • Full data migration

    DTS migrates the existing data of objects from the source database to the destination database.

  • Incremental data migration

    After full data migration is complete, DTS migrates incremental data from the source database to the destination database. Incremental data migration allows data to be migrated smoothly without interrupting services of self-managed applications during data migration.

SQL operations that can be migrated during incremental data migration

Operation type | SQL statement
DML | INSERT, UPDATE, and DELETE

Mappings

The MySQL data types supported by PolarDB-X do not exactly match the data types supported by Elasticsearch. DTS converts the data types of the source database to those of the Elasticsearch cluster during schema migration based on the data type mappings between heterogeneous databases. For more information, see Data type mappings between heterogeneous databases.
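
As an illustration only (not DTS's authoritative conversion table), a handful of common conversions might look like the following sketch; see the linked topic for the exact mappings:

```python
# Illustrative subset of MySQL-to-Elasticsearch type conversion.
# These pairs are examples for orientation, not DTS's exact mapping table.
MYSQL_TO_ES = {
    "BIGINT":   "long",
    "INT":      "integer",
    "DECIMAL":  "double",
    "VARCHAR":  "text",     # or "keyword" if the string is not analyzed
    "DATETIME": "date",
}

def es_type(mysql_type: str) -> str:
    """Return an Elasticsearch field type for a MySQL column type."""
    return MYSQL_TO_ES.get(mysql_type.upper(), "text")

print(es_type("datetime"))  # -> date
```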

Permissions required for database accounts

Database | Schema migration | Full data migration | Incremental data migration
Source PolarDB-X instance | SELECT permission | SELECT permission | REPLICATION SLAVE and REPLICATION CLIENT permissions, and the SELECT permission on the objects to be migrated
Destination Elasticsearch cluster | Read and write permissions on the objects to be migrated (required for all migration types). The default database account of an Elasticsearch cluster is elastic.
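
A minimal sketch of granting the source-side permissions listed above, run with an administrator account (the database name, account names, and host are placeholders, and the sketch assumes the dts_user account already exists):

```python
import pymysql

conn = pymysql.connect(host="pxc-xxxx.polarx.rds.aliyuncs.com", port=3306,
                       user="admin", password="***")
with conn.cursor() as cur:
    # SELECT on the objects to be migrated (placeholder database name).
    cur.execute("GRANT SELECT ON dtstest.* TO 'dts_user'@'%'")
    # Global replication privileges for incremental data migration.
    cur.execute("GRANT REPLICATION SLAVE, REPLICATION CLIENT "
                "ON *.* TO 'dts_user'@'%'")
```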

Procedure

  1. Go to the Data Migration Tasks page.
    1. Log on to the Data Management (DMS) console.
    2. In the top navigation bar, click DTS.
    3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.
  2. From the drop-down list next to Data Migration Tasks, select the region in which the data migration instance resides.
    Note If you use the new DTS console, you must select the region in which the data migration instance resides in the upper-left corner.
  3. Click Create Task. On the page that appears, configure the source and destination databases.
    Warning After you configure the source and destination databases, we recommend that you read the limits displayed at the top of the page. Otherwise, the task may fail or data inconsistency may occur.
    Section: N/A
    • Task Name: The task name that DTS automatically generates. We recommend that you specify a descriptive name that makes it easy to identify the task. You do not need to specify a unique task name.
    Section: Source Database
    • Select an existing DMS database instance: The database instance that you want to use. You can choose whether to select an existing instance based on your business requirements.
      • If you select an existing instance, DTS automatically populates the parameters for the database.
      • If you do not select an existing instance, you must manually configure the parameters for the database.
    • Database Type: The type of the source database. Select PolarDB-X 2.0.
    • Access Method: The access method of the source database. Select Alibaba Cloud Instance.
    • Instance Region: The region in which the source PolarDB-X instance resides.
    • Instance ID: The ID of the source PolarDB-X instance.
    • Database Account: The database account of the source PolarDB-X instance. For more information about the permissions that are required for the account, see the Permissions required for database accounts section of this topic.
    • Database Password: The password of the database account.
    Section: Destination Database
    • Select an existing DMS database instance: The database instance that you want to use. You can choose whether to select an existing instance based on your business requirements.
      • If you select an existing instance, DTS automatically populates the parameters for the database.
      • If you do not select an existing instance, you must manually configure the parameters for the database.
    • Database Type: The type of the destination database. Select Elasticsearch.
    • Access Method: The access method of the destination database. Select Alibaba Cloud Instance.
    • Instance Region: The region in which the destination Elasticsearch cluster resides.
    • Instance ID: The ID of the destination Elasticsearch cluster.
    • Database Account: The database account of the destination Elasticsearch cluster. For more information about the permissions that are required for the account, see the Permissions required for database accounts section of this topic.
    • Database Password: The password of the database account.

  4. In the lower part of the page, click Test Connectivity and Proceed.
    • If the source or destination database is an Alibaba Cloud database instance, such as an ApsaraDB RDS for MySQL or ApsaraDB for MongoDB instance, DTS automatically adds the CIDR blocks of DTS servers to the IP address whitelist of the instance.
    • If the source or destination database is a self-managed database hosted on an Elastic Compute Service (ECS) instance, DTS automatically adds the CIDR blocks of DTS servers to the security group rules of the ECS instance, and you must make sure that the ECS instance can access the database.
    • If the source or destination database is a self-managed database that is deployed in a data center or provided by a third-party cloud service provider, you must manually add the CIDR blocks of DTS servers to the IP address whitelist of the database to allow DTS to access the database. For more information, see the "CIDR blocks of DTS servers" section of the Add the CIDR blocks of DTS servers to the security settings of on-premises databases topic.
    Warning If the CIDR blocks of DTS servers are automatically or manually added to the IP address whitelist of the database instance or ECS security group rules, security risks may arise. Therefore, before you use DTS to migrate data, you must understand and acknowledge the potential risks and take preventive measures, including but not limited to the following measures: enhance the security of your account and password, limit the ports that are exposed, authenticate API calls, regularly check the IP address whitelist or ECS security group rules and forbid unauthorized CIDR blocks, and connect the database to DTS by using Express Connect, VPN Gateway, or Smart Access Gateway.
  5. Configure the objects to be migrated and advanced settings.
    Migration Type
    • To perform only full data migration, select Schema Migration and Full Data Migration.
    • To ensure service continuity during data migration, select Schema Migration, Full Data Migration, and Incremental Data Migration.
    Note If you do not select Incremental Data Migration, we recommend that you do not write data to the source database during data migration. This ensures data consistency between the source and destination databases.
    Processing Mode of Conflicting Tables
    • Precheck and Report Errors: checks whether the destination database contains tables that have the same names as tables in the source database. If the source and destination databases do not contain tables that have the same names, the precheck is passed. Otherwise, an error is returned during the precheck and the data migration task cannot be started.

      Note You can use the object name mapping feature to rename the tables that are migrated to the destination database. You can use this feature if the source and destination databases contain tables that have identical names and the tables in the destination database cannot be deleted or renamed. For more information, see Map object names.
    • Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.
      Warning If you select Ignore Errors and Proceed, data inconsistency may occur, and your business may be exposed to potential risks.
      • If the source and destination databases have the same schema, DTS does not migrate data records that have the same primary keys as data records in the destination database.
      • If the source and destination databases have different schemas, only specific columns are migrated or the data migration task fails. Proceed with caution.
    Index Name
    • Table Name

      If you select Table Name, the created index name in the destination Elasticsearch cluster is the same as the table name. In this example, order is used.

    • Database Name_Table Name

      If you select Database Name_Table Name, the created index name in the destination Elasticsearch cluster is in the format of Database name_Table name. In this example, dtstest_order is used. (The sketch after this step shows how to confirm which naming scheme took effect.)

    Source Objects

    Select one or more objects from the Source Objects section. Click the Rightwards arrow icon and add the objects to the Selected Objects section.

    Note You can select columns, tables, or schemas as the objects to be migrated. If you select tables or columns as the objects to be migrated, DTS does not migrate other objects, such as views, triggers, or stored procedures, to the destination database.
    Selected Objects
    • To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see Map the name of a single object.
    • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.
    Note
    • If you use the object name mapping feature to rename an object, other objects that are dependent on the object may fail to be migrated.
    • To specify WHERE conditions to filter data, right-click an object in the Selected Objects section. In the dialog box that appears, specify the conditions. For more information, see Use SQL conditions to filter data.
    • To select the SQL operations performed on a specific database or table, right-click an object in the Selected Objects section. In the dialog box that appears, select the SQL operations that you want to migrate. For more information about the SQL operations that can be migrated, see SQL operations that can be migrated during incremental data migration.
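
    After the task starts, you can confirm which index naming scheme took effect by listing the indexes in the destination cluster. A minimal sketch (the endpoint and credentials are placeholders; "order" and "dtstest_order" follow the example above):

```python
import requests

ES = "http://es-cn-xxxx.elasticsearch.aliyuncs.com:9200"   # placeholder
for row in requests.get(f"{ES}/_cat/indices?format=json",
                        auth=("elastic", "***"), timeout=10).json():
    print(row["index"])   # expect "order" or "dtstest_order" in this example
```
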
  6. Click Next: Advanced Settings to configure advanced settings.
    Set Alerts
    Specifies whether to configure alerting for the data migration task. If the task fails or the migration latency exceeds the specified threshold, the alert contacts receive notifications. Valid values:
    • No: does not configure alerting.
    • Yes: configures alerting. In this case, you must also specify the alert threshold and alert contacts.
    Retry Time for Failed Connections
    The retry time range for failed connections. If the source or destination database fails to be connected after the data migration task is started, DTS immediately retries a connection within the time range. Valid values: 10 to 1440. Unit: minutes. Default value: 720. We recommend that you set the parameter to a value greater than 30. If DTS reconnects to the source and destination databases within the specified time range, DTS resumes the data migration task. Otherwise, the data migration task fails.
    Note
    • If you set different retry time ranges for multiple data migration tasks that have the same source or destination database, the shortest retry time range that is set takes precedence.
    • When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time range based on your business requirements. You can also release the DTS instance at your earliest opportunity after the source and destination instances are released.
    Shard Configuration
    The numbers of primary shards and replica shards for the indexes that are created in the destination Elasticsearch cluster.
    String Index
    The method used to index strings in the destination Elasticsearch cluster.
    • analyzed: The strings are analyzed before they are indexed. You must select a specific analyzer. For more information about the analyzer types, see Built-in analyzer reference.
    • not analyzed: The strings are indexed with their original values.
    • no: The strings are not indexed.
    Time Zone
    The time zone of date and time data types such as DATETIME and TIMESTAMP. You can select the time zone to use when such data is migrated to the destination Elasticsearch cluster.
    Note If the date and time data types in the destination cluster do not require a time zone, you must specify the document type for the date and time data types.
    DOCID
    The default value of this parameter is the primary key of the corresponding table in the Elasticsearch cluster. If the table does not have a primary key, the value is the ID column that is automatically generated by Elasticsearch.
    Configure ETL
    Specifies whether to configure the extract, transform, and load (ETL) feature. For more information, see What is ETL?. Valid values:
    • Yes: configures the ETL feature. You can enter data processing statements in the code editor.
    • No: does not configure the ETL feature.
    Whether to delete SQL operations on heartbeat tables of forward and reverse tasks
    Specifies whether to write SQL operations on heartbeat tables to the source database while the DTS instance is running.
    • Yes: does not write SQL operations on heartbeat tables. In this case, a latency of the DTS instance may be displayed.
    • No: writes SQL operations on heartbeat tables. In this case, specific features such as physical backup and cloning of the source database may be affected.
  7. In the lower part of the page, click Next: Configure Database and Table Fields. On the page that appears, set the _routing policy and _id value for the tables that you want to migrate to the destination Elasticsearch cluster.
    Set _routing
    Specifies whether to store a document on a specific shard of the destination Elasticsearch cluster. For more information, see _routing.
    • If you select Yes, you can specify custom columns for routing.
    • If you select No, the _id value is used for routing.
    Note If the version of the destination Elasticsearch cluster is 7.x, you must select No.
    Value of _id
    • Primary key column

      Multiple columns are merged into one composite primary key.

    • Business key

      If you select a business key, you must also specify the business key column.
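
    After the task is running, you can fetch a migrated document directly by the _id configured in this step. A minimal sketch (the endpoint, index name, and key value are placeholders; pass the routing parameter only if you selected custom routing columns):

```python
import requests

ES = "http://es-cn-xxxx.elasticsearch.aliyuncs.com:9200"   # placeholder
# _id equals the primary key value when Primary key column is selected.
resp = requests.get(f"{ES}/order/_doc/1001",
                    params={"routing": "1001"},  # omit if _id is used
                    auth=("elastic", "***"), timeout=10)
print(resp.json().get("_source"))
```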

  8. In the lower part of the page, click Next: Save Task Settings and Precheck.

    You can move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters to view the parameters to be specified when you call the relevant API operation to configure the DTS task.

    Note
    • Before you can start the data migration task, DTS performs a precheck. You can start the data migration task only after the task passes the precheck.
    • If the task fails to pass the precheck, click View Details next to each failed item. After you troubleshoot the issues based on the causes, run a precheck again.
    • If an alert is triggered for an item during the precheck:
      • If an alert item cannot be ignored, click View Details next to the failed item and troubleshoot the issues. Then, run a precheck again.
      • If an alert item can be ignored, click Confirm Alert Details. In the View Details dialog box, click Ignore. In the message that appears, click OK. Then, click Precheck Again to run a precheck again. If you ignore the alert item, data inconsistency may occur, and your business may be exposed to potential risks.
  9. Wait until the Success Rate value becomes 100%. Then, click Next: Purchase Instance.
  10. On the Purchase Instance page, specify the Instance Class parameter for the data migration instance. The following table describes the parameter.
    Section: New Instance Class
    • Resource Group: The resource group to which the data migration instance belongs. Default value: default resource group. For more information, see What is Resource Management?.
    • Instance Class: DTS provides instance classes that vary in migration speed. You can select an instance class based on your business scenario. For more information, see Specifications of data migration instances.
  11. Read and select the check box to agree to Data Transmission Service (Pay-as-you-go) Service Terms.
  12. Click Buy and Start to start the data migration task. You can view the progress of the task in the task list.

Check the index and data

After the state of the data migration task changes to Running, you can use the data visualization tool Kibana to connect to the Elasticsearch cluster. This way, you can check whether the index is created and data is migrated as expected. For more information about how to log on to the Kibana console, see Log on to the Kibana console.

Note If the index is not created or data is not migrated as expected, you can delete the index and data, and then configure the data migration task again.
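
If you prefer to script the check instead of using Kibana, the following minimal sketch compares the source row count with the destination document count (all connection details and object names are placeholders that follow the order/dtstest_order example in this topic):

```python
import pymysql
import requests

ES = "http://es-cn-xxxx.elasticsearch.aliyuncs.com:9200"   # placeholder

conn = pymysql.connect(host="pxc-xxxx.polarx.rds.aliyuncs.com", port=3306,
                       user="dts_user", password="***")
with conn.cursor() as cur:
    # Backticks because "order" is a reserved word in MySQL.
    cur.execute("SELECT COUNT(*) FROM dtstest.`order`")
    src_count = cur.fetchone()[0]

es_count = requests.get(f"{ES}/dtstest_order/_count",
                        auth=("elastic", "***"), timeout=10).json()["count"]
print(f"source rows: {src_count}, destination docs: {es_count}")
```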