Alibaba Cloud Elasticsearch is compatible with open source Elasticsearch features such as Security, Machine Learning, Graph, and Application Performance Management (APM). Alibaba Cloud Elasticsearch provides capabilities such as enterprise-level access control, security monitoring and alerts, and automatic report generation. You can use Alibaba Cloud Elasticsearch to search and analyze data. This topic describes how to synchronize data from a PolarDB for MySQL cluster to an Elasticsearch cluster by using Data Transmission Service (DTS).
Prerequisites
An Elasticsearch cluster of version 5.5, 5.6, 6.3, 6.7, or 7.x is created. For more information, see Create an Elasticsearch cluster.
The binary logging feature is enabled for the PolarDB for MySQL cluster. For more information, see Enable binary logging.
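If you want to confirm from a client that binary logging is in effect before you configure the task, you can run a quick check against the cluster endpoint. The following is a minimal sketch that assumes the pymysql package and placeholder connection details; adjust them to your own cluster.

```python
# Minimal sketch: verify that binary logging is enabled on the source PolarDB for MySQL
# cluster. Host, account, and password are placeholders.
import pymysql

conn = pymysql.connect(
    host="pc-xxxxxxxx.mysql.polardb.rds.aliyuncs.com",  # placeholder cluster endpoint
    user="dts_sync",                                    # placeholder account
    password="your_password",
)
try:
    with conn.cursor() as cur:
        cur.execute("SHOW VARIABLES LIKE 'log_bin'")
        print(cur.fetchone())  # expected: ('log_bin', 'ON') when binary logging is enabled
        cur.execute("SHOW VARIABLES LIKE 'binlog_format'")
        print(cur.fetchone())  # ROW is the format typically used for DTS-based synchronization
finally:
    conn.close()
```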
Precautions
DTS uses the read and write resources of the source and destination databases during initial full data synchronization. This may increase the loads of the source PolarDB for MySQL cluster and the destination Elasticsearch cluster. If the instance specifications are low or the data volume is large, the database services may become unavailable. For example, DTS occupies a large amount of read and write resources in the following cases: a large number of slow SQL queries are performed on the source database, the tables have no primary keys, or a deadlock occurs in the destination database. Before you synchronize data, evaluate the impact of data synchronization on the performance of the source and destination instances. We recommend that you synchronize data during off-peak hours, for example, when the CPU utilization of the source and destination instances is less than 30%.
DTS does not synchronize DDL operations. If a DDL operation is performed on a table in the source database during data synchronization, you must perform the following operations: Remove the table from the objects to be synchronized, delete the index for the table from the Elasticsearch cluster, and then add the table to the objects to be synchronized. For more information, see Remove an object from a data synchronization task and Add an object to a data synchronization task.
To add columns to the table that you want to synchronize, perform the following steps: Modify the mapping of the table in the Elasticsearch cluster, perform DDL operations in the PolarDB for MySQL cluster, and then pause and start the data synchronization task.
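For reference, the mapping of an existing index can be extended through the Elasticsearch mapping API before the new column is added to the source table. The following is a minimal sketch that assumes an Elasticsearch 7.x cluster, the requests package, the customer index from this example, and a hypothetical new column named phone; adjust the field name and data type to your own schema.

```python
# Minimal sketch: extend the mapping of the existing "customer" index with a field for a
# column that will be added to the source table (assumes Elasticsearch 7.x; the "phone"
# field is a hypothetical example).
import requests

ES_URL = "http://es-cn-xxxxxxxx.elasticsearch.aliyuncs.com:9200"  # placeholder endpoint
resp = requests.put(
    f"{ES_URL}/customer/_mapping",
    json={"properties": {"phone": {"type": "keyword"}}},
    auth=("elastic", "your_password"),
)
resp.raise_for_status()
print(resp.json())  # {'acknowledged': True} on success
```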
SQL operations that can be synchronized
INSERT, DELETE, and UPDATE
Data type mappings
The data types of the PolarDB for MySQL cluster and the Elasticsearch cluster do not have one-to-one correspondence. During initial schema synchronization, DTS converts the data types of the PolarDB for MySQL cluster into those of the Elasticsearch cluster. For more information, see Data type mappings for schema synchronization.
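After initial schema synchronization, you can inspect the mapping that was generated for an index and compare it with the documented data type mappings. A minimal sketch, assuming the customer index from this example, the requests package, and placeholder connection details:

```python
# Minimal sketch: inspect the mapping that initial schema synchronization generated
# for the "customer" index.
import requests

ES_URL = "http://es-cn-xxxxxxxx.elasticsearch.aliyuncs.com:9200"  # placeholder endpoint
resp = requests.get(f"{ES_URL}/customer/_mapping", auth=("elastic", "your_password"))
resp.raise_for_status()
print(resp.json())  # field-by-field data types of the synchronized index
```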
Procedure
Purchase a data synchronization instance. For more information, see Purchase a DTS instance.
Note: On the buy page, set Source Instance to PolarDB, Destination Instance to Elasticsearch, and Synchronization Topology to One-Way Synchronization.
Log on to the DTS console.
Note: If you are redirected to the Data Management (DMS) console, you can click the icon in the lower-right corner to go to the previous version of the DTS console.
In the left-side navigation pane, click Data Synchronization.
In the upper part of the Data Synchronization Tasks page, select the region in which the destination instance resides.
Find the data synchronization instance and click Configure Task in the Actions column.
Configure the source and destination instances.
Synchronization Task Name: The task name that DTS automatically generates. We recommend that you specify a descriptive name that makes it easy to identify the task. You do not need to use a unique task name.
Source Instance Details
- Instance Type: The value of this parameter is set to PolarDB Instance and cannot be changed.
- Instance Region: The source region that you selected on the buy page. The value of this parameter cannot be changed.
- PolarDB Instance ID: The ID of the source PolarDB for MySQL cluster.
- Database Account: The database account of the PolarDB for MySQL cluster. Note: The account must have read permissions on the source database. A sketch after this parameter list shows one way such an account could be created.
- Database Password: The password of the database account.
Destination Instance Details
- Instance Type: The value of this parameter is set to Elasticsearch and cannot be changed.
- Instance Region: The destination region that you selected on the buy page. The value of this parameter cannot be changed.
- Elasticsearch: The ID of the destination Elasticsearch cluster.
- Database Account: The account that is used to connect to the Elasticsearch cluster. The default account is elastic.
- Database Password: The password of the database account.
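If you plan to use a dedicated database account for the task, the following is a minimal sketch of how such an account could be created on the source cluster. It assumes the pymysql package, a privileged account, placeholder host and passwords, and the dtstestdata database from this example; the privileges that DTS actually requires may be broader than plain read access depending on your scenario, so treat this only as an illustration.

```python
# Minimal sketch: create a dedicated account with read access on the database to be
# synchronized. Host, account names, and passwords are placeholders.
import pymysql

conn = pymysql.connect(
    host="pc-xxxxxxxx.mysql.polardb.rds.aliyuncs.com",  # placeholder cluster endpoint
    user="admin_account",                               # placeholder privileged account
    password="admin_password",
)
try:
    with conn.cursor() as cur:
        cur.execute("CREATE USER 'dts_sync'@'%' IDENTIFIED BY 'Sync_password1'")  # hypothetical account
        cur.execute("GRANT SELECT ON dtstestdata.* TO 'dts_sync'@'%'")            # read permissions only
finally:
    conn.close()
```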
In the lower-right corner of the page, click Set Whitelist and Next.
Note:
- You do not need to modify the security settings for ApsaraDB instances (such as ApsaraDB RDS for MySQL and ApsaraDB for MongoDB) and ECS-hosted databases. DTS automatically adds the CIDR blocks of DTS servers to the whitelists of ApsaraDB instances or the security group rules of Elastic Compute Service (ECS) instances. For more information, see Add the CIDR blocks of DTS servers to the security settings of on-premises databases.
- After data synchronization is complete, we recommend that you remove the CIDR blocks of DTS servers from the whitelists or security groups.
Configure the index name, the processing mode of identical index names, and the objects to be synchronized.
Index Name
- Table Name: The name of the index that is created in the Elasticsearch cluster is the same as the name of the table. In this example, the index name is customer.
- DatabaseName_TableName: The name of the index that is created in the Elasticsearch cluster is <Database name>_<Table name>. In this example, the index name is dtstestdata_customer.
Processing Mode In Existed Target Table
- Pre-check and Intercept: checks whether the destination cluster contains indexes that have the same names as the source tables. If the destination cluster does not contain such indexes, the precheck is passed. Otherwise, an error is returned during the precheck and the data synchronization task cannot be started.
  Note: If indexes in the destination cluster have the same names as the source tables and cannot be deleted or renamed, you can use the object name mapping feature. For more information, see Rename an object to be synchronized. A sketch after this parameter list shows one way to check for conflicting index names in advance.
- Ignore: skips the precheck for indexes in the destination cluster that have the same names as the source tables.
  Warning: If you select Ignore, data inconsistency may occur and your business may be exposed to potential risks. If the source database and destination cluster have the same mappings and the primary key of a record in the destination cluster is the same as that in the source database, the record remains unchanged during initial data synchronization but is overwritten during incremental data synchronization. If the source database and destination cluster have different mappings, initial data synchronization may fail, only some columns may be synchronized, or the data synchronization task may fail.
Select the objects to be synchronized
- Select one or more objects from the Available section and click the icon to add the objects to the Selected section. You can select tables or databases as the objects to be synchronized.
Rename Databases and Tables
- You can use the object name mapping feature to rename the objects that are synchronized to the destination instance. For more information, see Object name mapping.
Replicate Temporary Tables When DMS Performs DDL Operations
- If you use Data Management (DMS) to perform online DDL operations on the source database, you can specify whether to synchronize the temporary tables generated by online DDL operations.
  - Yes: DTS synchronizes the data of temporary tables generated by online DDL operations. Note: If online DDL operations generate a large amount of data, the data synchronization task may be delayed.
  - No: DTS does not synchronize the data of temporary tables generated by online DDL operations. Only the original DDL data of the source database is synchronized. Note: If you select No, the tables in the destination database may be locked.
Retry Time for Failed Connections
- By default, if DTS fails to connect to the source or destination database, DTS retries within the next 720 minutes (12 hours). You can specify the retry time based on your needs. If DTS reconnects to the source and destination databases within the specified time, DTS resumes the data synchronization task. Otherwise, the data synchronization task fails.
  Note: While DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time based on your business needs. You can also release the DTS instance at your earliest opportunity after the source and destination instances are released.
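If you want to check for conflicting index names before you start the task (as mentioned in the Pre-check and Intercept description above), you can query the destination cluster directly. The following is a minimal sketch that assumes the customer index name from this example, the requests package, and a placeholder endpoint and password.

```python
# Minimal sketch: check whether the destination cluster already contains an index
# whose name conflicts with a source table. Endpoint and password are placeholders.
import requests

ES_URL = "http://es-cn-xxxxxxxx.elasticsearch.aliyuncs.com:9200"  # placeholder endpoint
AUTH = ("elastic", "your_password")

resp = requests.get(f"{ES_URL}/_cat/indices/customer?format=json", auth=AUTH)
if resp.status_code == 404:
    print("No index named 'customer' exists; Pre-check and Intercept should pass.")
else:
    resp.raise_for_status()
    # A conflicting index exists: rename or delete it, or use object name mapping.
    print("Conflicting index found:", resp.json())
```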
In the Selected section, move the pointer over a table and select Edit. In the Edit Table dialog box, configure parameters for the table in the Elasticsearch cluster, such as the index name and type name.
Index Name and Type Name
- The index name and type name of the table in the destination Elasticsearch cluster. For more information, see Terms.
  Warning: An index name or a type name can contain only underscores (_) as special characters. To synchronize multiple source tables that have the same schema to a single destination object, you must repeat this step to set the same index name and type name for these tables. Otherwise, the data synchronization task fails or data loss occurs.
Filter
- The SQL conditions that you specify to filter data. Only the data records that meet the specified conditions are synchronized to the destination cluster. For more information, see Set filter conditions.
IsPartition
- Specifies whether to configure partitions. If you select Yes, you must also specify the partition key column and the number of partitions.
Settings_routing
- Specifies whether to store a document on a specific shard of the destination Elasticsearch cluster. For more information, see _routing in the Elasticsearch documentation.
  - If you select Yes, you can specify custom columns for routing.
  - If you select No, the _id value is used for routing.
  Note: If the version of the destination Elasticsearch cluster is 7.4, you must select No.
_id value
- Primary key column: Multiple columns are merged into one composite primary key.
- Business key: If you select a business key, you must also specify the business key column.
  A sketch after this parameter list shows how to fetch a synchronized document by its _id.
add param
- You can click add param to add a row. In each row, specify the column param and the param value. For more information, see Mapping parameters in the Elasticsearch documentation.
  Note: DTS supports only the parameters that are displayed in the column param drop-down list.
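After the task is running, you can spot check a single synchronized document. The following is a minimal sketch that assumes an Elasticsearch 7.x cluster (where documents are read under the _doc endpoint), the customer index from this example, the default _id behavior (the primary key value), a primary key value of 1 that is assumed to exist, and placeholder connection details.

```python
# Minimal sketch: fetch one synchronized document by its _id. With the default settings,
# the _id is taken from the primary key column of the source table.
import requests

ES_URL = "http://es-cn-xxxxxxxx.elasticsearch.aliyuncs.com:9200"  # placeholder endpoint
resp = requests.get(f"{ES_URL}/customer/_doc/1", auth=("elastic", "your_password"))
print(resp.status_code)            # 200 if the row with primary key 1 has been synchronized
print(resp.json().get("_source"))  # the synchronized column values
```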
In the lower-right corner of the page, click Precheck.
Note: Before you can start the data synchronization task, DTS performs a precheck. You can start the data synchronization task only after the task passes the precheck.
If the task fails to pass the precheck, you can click the icon next to each failed item to view details.
After you troubleshoot the issues based on the details, initiate a new precheck.
If you do not need to troubleshoot the issues, ignore the failed items and initiate a new precheck.
Close the Precheck dialog box after the following message is displayed: Precheck Passed. Then, the data synchronization task starts.
Wait until initial synchronization is complete and the data synchronization task enters the Synchronizing state.
You can view the status of the data synchronization task on the Synchronization Tasks page.
Check the index and data
If the data synchronization task is in the Synchronizing state, you can connect to the Elasticsearch cluster by using a tool such as the Elasticsearch-Head plug-in or Cerebro. Then, you can check whether the index is created and data is synchronized as expected. For more information, see Use Cerebro to access an Elasticsearch cluster.
If the index is not created or data is not synchronized as expected, you can delete the index and data, and then configure the data synchronization task again.
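If you prefer to check from a script rather than a browser plug-in, you can query the cluster directly. The following is a minimal sketch that assumes the customer index from this example, the requests package, and placeholder connection details.

```python
# Minimal sketch: confirm that the index was created and count the synchronized documents.
import requests

ES_URL = "http://es-cn-xxxxxxxx.elasticsearch.aliyuncs.com:9200"  # placeholder endpoint
AUTH = ("elastic", "your_password")

# List indices together with their document counts.
print(requests.get(f"{ES_URL}/_cat/indices?v", auth=AUTH).text)

# Count documents in the synchronized index and fetch one sample document.
print(requests.get(f"{ES_URL}/customer/_count", auth=AUTH).json())
sample = requests.post(f"{ES_URL}/customer/_search", json={"size": 1}, auth=AUTH)
print(sample.json()["hits"]["hits"])
```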