
Data Transmission Service:Migrate ApsaraDB RDS for MySQL to Elasticsearch

Last Updated:Feb 13, 2026

This topic describes how to use Data Transmission Service (DTS) to migrate ApsaraDB RDS for MySQL to Alibaba Cloud Elasticsearch.

Prerequisites

  • You have created a target Elasticsearch instance. For more information, see Create an Alibaba Cloud Elasticsearch Instance.

  • To perform a full data migration, the storage space of the destination Elasticsearch instance must be larger than the storage space used by the source database.

Considerations

Note
  • During schema migration, DTS migrates foreign keys from the source database to the destination database.

  • During full data migration and incremental data migration, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. If cascade update or delete operations occur in the source database while the task is running, data inconsistency may occur.

Source database limits

  • The tables to be migrated must have a primary key or a UNIQUE constraint, and the constrained columns must contain unique values. Otherwise, duplicate data may appear in the destination database.
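    Before you configure the task, you can flag tables that would violate this requirement. The following is a minimal sketch; it assumes you have already fetched constraint rows (for example, from information_schema.TABLE_CONSTRAINTS), and the table names are made up for illustration:

    ```python
    # Flag tables lacking a PRIMARY KEY or UNIQUE constraint, which DTS
    # requires to avoid duplicate rows in the destination.
    def tables_without_unique_key(constraint_rows, all_tables):
        """constraint_rows: iterable of (table_name, constraint_type) tuples."""
        keyed = {t for t, ctype in constraint_rows
                 if ctype in ("PRIMARY KEY", "UNIQUE")}
        return sorted(set(all_tables) - keyed)

    # Example with hypothetical table names:
    rows = [("orders", "PRIMARY KEY"), ("users", "UNIQUE"), ("logs", "FOREIGN KEY")]
    print(tables_without_unique_key(rows, ["orders", "users", "logs", "events"]))
    # → ['events', 'logs']
    ```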

  • If you migrate objects at the table level and need to edit them (for example, to map table or column names), and a single migration task contains more than 5,000 tables, the request may fail after the task is submitted. In that case, split the tables across multiple batched tasks, or configure a task that migrates the entire database.

  • If you need incremental migration, enable binary logging:

    • Set binlog_format to ROW and binlog_row_image to FULL. Otherwise, the precheck fails and the task cannot start.

      Important

      If your self-managed MySQL source is a dual-master cluster—where each instance acts as both master and slave—enable the log_slave_updates parameter. This ensures DTS can read all binary logs.

    • For RDS for MySQL instances, retain local binary logs for at least three days (seven days recommended). For self-managed MySQL databases, retain local binary logs for at least seven days. If DTS cannot access binary logs, the task fails. In extreme cases, data inconsistency or data loss may occur. Issues caused by binary log retention periods shorter than DTS requires are not covered under the DTS SLA.

      Note

      To set the retention period for local binary logs on an RDS for MySQL instance, see Automatically delete local logs.
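    The binary log requirements above can be expressed as a simple validation sketch. This is an illustration only; in practice the values come from running SHOW VARIABLES LIKE 'binlog%' and SHOW VARIABLES LIKE 'log_slave_updates' on the source, and here they are supplied as a plain dict:

    ```python
    # Sketch of the binary log checks the DTS precheck performs.
    def check_binlog_settings(variables, dual_master=False):
        problems = []
        if variables.get("binlog_format", "").upper() != "ROW":
            problems.append("binlog_format must be ROW")
        if variables.get("binlog_row_image", "").upper() != "FULL":
            problems.append("binlog_row_image must be FULL")
        # Dual-master clusters must also replicate relayed events into the binlog.
        if dual_master and variables.get("log_slave_updates", "OFF").upper() != "ON":
            problems.append("log_slave_updates must be ON for dual-master clusters")
        return problems

    print(check_binlog_settings({"binlog_format": "STATEMENT",
                                 "binlog_row_image": "FULL"}))
    # → ['binlog_format must be ROW']
    ```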

  • Source database operation limits: During the schema migration phase, do not perform DDL operations that change the database or table schema. Otherwise, the data migration task will fail.

  • If you need incremental migration, RDS for MySQL instances that do not record transaction logs—such as RDS for MySQL 5.6 read-only instances—are not supported as sources.

  • DTS does not migrate data generated by changes that do not write to binary logs. Examples include data restored from physical backups or created by cascade operations.

    Note

    If this occurs, re-run the full migration when your business allows it.

  • If your source MySQL database is version 8.0.23 or later and contains invisible hidden columns, DTS cannot read those columns. This may cause data loss.

    Note

    Run ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE; to make the hidden column visible. For more information, see Invisible Columns.
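    If several tables are affected, you can generate the ALTER statements in bulk. A hedged sketch, assuming you have already discovered the invisible columns (for example, via information_schema.COLUMNS, where EXTRA contains INVISIBLE on MySQL 8.0.23 and later); the table and column names below are hypothetical:

    ```python
    # Generate the SET VISIBLE statements from the note above for each
    # (table, column) pair discovered in the source database.
    def make_visible_sql(invisible_columns):
        return [f"ALTER TABLE {t} ALTER COLUMN {c} SET VISIBLE;"
                for t, c in invisible_columns]

    for stmt in make_visible_sql([("orders", "audit_token")]):
        print(stmt)
    # → ALTER TABLE orders ALTER COLUMN audit_token SET VISIBLE;
    ```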

Other limits

  • DTS does not support migrating INDEX, PARTITION, VIEW, PROCEDURE, FUNCTION, TRIGGER, or foreign key (FK) objects.

  • DTS does not support migrating data to indexes in the destination that have a parent-child relationship or Join field type mapping. Otherwise, the task may become abnormal or data queries in the destination may fail.

  • To add columns to a table that is being migrated, first modify the table's mapping in the Elasticsearch instance, then execute the corresponding DDL operation in the source database, and finally pause and restart the migration task.

  • Before performing data migration, evaluate the performance of the source and destination databases. Perform data migration during off-peak hours. Otherwise, DTS will occupy some read and write resources of the source and destination databases during full data migration, which may increase the database load.

  • Because full data migration performs concurrent INSERT operations, the tables in the destination database will become fragmented. Therefore, the storage space of tables in the destination database will be larger than that of tables in the source instance after full migration.

  • DTS attempts to resume data migration tasks that failed within the last seven days. Therefore, before switching your business to the destination instance, terminate or release the task, or use the revoke command to remove the DTS account's write permissions on the destination instance. Otherwise, after the task automatically resumes, data from the source instance will overwrite the data in the destination instance.

  • If data migrated from a MySQL instance to an Elasticsearch instance contains empty characters, it will be converted to LONG type data and written to the Elasticsearch instance, causing the task to fail.

  • If data migrated from a MySQL instance to an Elasticsearch instance contains location information, and the latitude and longitude are stored in reverse, an error will occur when writing the data to the Elasticsearch instance.
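    Swapped coordinates can often be caught before migration: Elasticsearch rejects latitudes outside the range [-90, 90], so rows whose longitude and latitude were stored in reverse usually surface as out-of-range latitudes. A minimal pre-flight sketch (row data is illustrative):

    ```python
    # Flag rows whose stored latitude is out of range, a common symptom of
    # longitude and latitude being stored in reverse.
    def swapped_coordinates(rows):
        """rows: iterable of (row_id, latitude, longitude) tuples."""
        suspect = []
        for row_id, lat, lon in rows:
            if not -90 <= lat <= 90:
                suspect.append(row_id)  # latitude out of range: likely swapped
        return suspect

    # Row 2 stores Beijing's longitude (116.4) in the latitude column.
    print(swapped_coordinates([(1, 39.9, 116.4), (2, 116.4, 39.9)]))
    # → [2]
    ```

    Note that this check cannot catch swaps where both values happen to fall within [-90, 90]; those require validating against known reference locations.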

  • Elasticsearch instances that use development and test specifications are not supported.

  • If your RDS for MySQL instance has Always-Encrypted enabled, full migration is not supported.

    Note

    RDS for MySQL instances with Transparent Data Encryption (TDE) enabled support schema migration, full migration, and incremental migration.

  • If a task fails, DTS support staff will attempt to restore it within eight hours. During restoration, they may restart the task or adjust its parameters.

    Note

    Only DTS task parameters are modified—not database parameters. Parameters that may be adjusted include those listed in Modify instance parameters.

Special cases

  • For self-managed MySQL sources:

    • A master–standby switchover on the source database causes the migration task to fail.

    • DTS calculates latency by comparing the timestamp of the last record migrated to the destination database with the current time. If no DML operations run on the source for a long time, latency reporting becomes inaccurate. If latency appears too high, run a DML operation on the source to update the latency value.

      Note

      If you select full-database migration, create a heartbeat table. Update or write to it every second.
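      A heartbeat table only needs a primary key and a timestamp that changes on every write. The following sketch shows one possible shape; the table and column names are hypothetical, and the REPLACE statement should be executed once per second by a scheduler:

      ```python
      # Hypothetical heartbeat table for full-database migration from a
      # self-managed source: the periodic REPLACE produces a fresh binlog
      # event so DTS latency reporting stays accurate.
      CREATE_HEARTBEAT = """\
      CREATE TABLE IF NOT EXISTS dts_heartbeat (
        id INT PRIMARY KEY,
        ts TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
      );"""

      BEAT = "REPLACE INTO dts_heartbeat (id) VALUES (1);"

      print(BEAT)
      # → REPLACE INTO dts_heartbeat (id) VALUES (1);
      ```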

    • DTS periodically runs CREATE DATABASE IF NOT EXISTS `test` on the source database to advance the binary log offset.

    • If your source is Amazon Aurora MySQL or another clustered MySQL instance, ensure the domain name or IP address configured for the task—and its DNS resolution—always points to a read–write (RW) node. Otherwise, the migration task may fail.

  • For RDS for MySQL sources:

    • If you need incremental migration, RDS for MySQL instances that do not record transaction logs—such as RDS for MySQL 5.6 read-only instances—are not supported as sources.

    • DTS periodically runs CREATE DATABASE IF NOT EXISTS `test` on the source database to advance the binary log offset.

Billing

  • Schema migration and full data migration

    • Instance configuration fee: Free of charge.

    • Internet traffic fee: When the Access Method parameter of the destination database is set to Public IP Address, you are charged for Internet traffic. For more information, see Billing overview.

  • Incremental data migration

    • Instance configuration fee: Charged. For more information, see Billing overview.

    • Internet traffic fee: When the Access Method parameter of the destination database is set to Public IP Address, you are charged for Internet traffic. For more information, see Billing overview.

Migration Type Description

  • Schema migration

    DTS migrates the schema definitions of the migration objects from the source database to the destination database.

  • Full migration

    DTS migrates all historical data of the specified migration objects from the source database to the destination database.

  • Incremental migration

    After a full migration is complete, DTS migrates incremental data updates from the source database to the destination database. Incremental migration lets you smoothly migrate data without interrupting your self-managed applications.

SQL Operations Supported for Incremental Migration

  • DML: INSERT, UPDATE, and DELETE

Note

DTS does not support migrating operations that remove fields using UPDATE statements.

Database Account Permissions

  • RDS MySQL instance

    • Schema migration: SELECT permission

    • Full migration: SELECT permission

    • Incremental migration: REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW, and SELECT permissions

    • For account creation and authorization, see Create an account.

  • Elasticsearch instance

    • The database account must have read and write permissions, such as the default elastic account.

Data type mappings

  • Because source databases and Elasticsearch instances support different data types, data types cannot always be mapped directly. During initial schema synchronization, DTS maps data types based on the types that the destination Elasticsearch instance supports. For more information, see Data type mappings for initial schema synchronization.

    Note

    DTS does not set the dynamic parameter in the mapping during schema migration. The behavior of this parameter depends on your Elasticsearch instance settings. If your source data is in JSON format, ensure that the values for the same key have the same data type across all rows in a table. Otherwise, DTS may report synchronization errors. For more information, see dynamic.
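    The uniform-type requirement on JSON values can be verified before migration. A minimal sketch, with rows supplied as raw JSON strings for illustration:

    ```python
    # Detect JSON keys whose value type differs between rows of a table,
    # which can make Elasticsearch dynamic mapping reject later documents.
    import json

    def inconsistent_keys(json_rows):
        seen = {}        # key -> type name first observed
        bad = set()
        for raw in json_rows:
            for key, value in json.loads(raw).items():
                tname = type(value).__name__
                if seen.setdefault(key, tname) != tname:
                    bad.add(key)
        return sorted(bad)

    # "price" is a number in one row and a string in another:
    print(inconsistent_keys(['{"price": 10}', '{"price": "10"}']))
    # → ['price']
    ```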

  • The mappings between Elasticsearch concepts and relational database concepts are as follows:

    • Index ↔ Database

    • Type ↔ Table

    • Document ↔ Row

    • Field ↔ Column

    • Mapping ↔ Database schema

Procedure

  1. Navigate to the migration task list page for the destination region using one of the following methods.

    From the DTS console

    1. Log on to the Data Transmission Service (DTS) console.

    2. In the navigation pane on the left, click Data Migration.

    3. In the upper-left corner of the page, select the region where the migration instance is located.

    From the DMS console

    Note

    The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.

    1. Log on to the Data Management (DMS) console.

    2. In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

    3. To the right of Data Migration Tasks, select the region where the migration instance is located.

  2. Click Create Task to navigate to the task configuration page.

  3. Configure the source and destination databases.


    Task Name

    DTS automatically generates a task name. We recommend that you specify a descriptive name for easy identification. The name does not need to be unique.

    Source Database

    Select Existing Connection

    • To use a database instance that has been added to the system (created or saved), select the desired database instance from the drop-down list. The database information below will be automatically configured.

      Note

      In the DMS console, this parameter is named Select a DMS database instance.

    • If you have not registered the database instance with the system, or do not need to use a registered instance, manually configure the database information below.

    Database Type

    Select MySQL.

    Connection Type

    Select Cloud Instance.

    Instance Region

    Select the region of the source RDS MySQL instance.

    Is it cross-Alibaba Cloud account

    This example involves migration within the same Alibaba Cloud account. Set to Not cross-account.

    RDS Instance ID

    Select the source RDS MySQL instance ID.

    Database Account

    Enter the database account of the source RDS MySQL instance. For permission requirements, see Permissions required for database accounts.

    Database Password

    Enter the password for the specified database account.

    Connection Method

    Select Non-encrypted or SSL-encrypted as needed. If you set this to SSL-encrypted, you must enable SSL encryption for the RDS for MySQL instance beforehand. For more information, see Use a cloud certificate to quickly enable SSL link encryption.

    Destination Database

    Select Existing Connection

    • To use a database instance that has been added to the system (created or saved), select the desired database instance from the drop-down list. The database information below will be automatically configured.

      Note

      In the DMS console, this parameter is named Select a DMS database instance.

    • If you have not registered the database instance with the system, or do not need to use a registered instance, manually configure the database information below.

    Database Type

    Select Elasticsearch.

    Connection Type

    Select Cloud Instance.

    Instance Region

    Select the region where the target Elasticsearch instance is located.

    Type

    Select Cluster Edition or Serverless based on your requirements.

    Instance ID

    Select the target Elasticsearch instance ID.

    Database Account

    Enter the account used to connect to the Elasticsearch instance—namely, the Login Name entered when you created the Elasticsearch instance. The default account is elastic.

    Database Password

    Enter the password for the specified database account.

    Encryption

    Select HTTP or HTTPS as needed.

  4. After you complete the configuration, click Test Connectivity and Proceed at the bottom of the page.

    Note
    • Ensure that the IP address segment of the DTS service is automatically or manually added to the security settings of the source and destination databases to allow access from DTS servers. For more information, see Add DTS server IP addresses to a whitelist.

    • If the source or destination database is a self-managed database (the Access Method is not Alibaba Cloud Instance), you must also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box that appears.

  5. Configure the task objects.

    1. On the Configure Objects page, configure the objects that you want to migrate.


      Migration Types

      • If you only need to perform a full migration, select both Schema Migration and Full Data Migration.

      • To perform a migration with no downtime, select Schema Migration, Full Data Migration, and Incremental Data Migration.

      Note
      • If you do not select Schema Migration, you must ensure that a database and tables to receive the data exist in the destination database. You can also use the object name mapping feature in the Selected Objects box as needed.

      • If you do not select Incremental Data Migration, do not write new data to the source instance during data migration to ensure data consistency.

      Processing Mode for Existing Destination Tables

      • Precheck and Report Errors: Checks whether tables with the same names exist in the destination database. If no tables with the same names exist, the precheck is passed. If tables with the same names exist, an error is reported during the precheck, and the data migration task does not start.

        Note

        If a table in the destination database has the same name but cannot be easily deleted or renamed, you can change the name of the table in the destination database. For more information, see Object name mapping.

      • Ignore Errors and Proceed: Skips the check for tables with the same names.

        Warning

        Selecting Ignore Errors and Proceed may cause data inconsistency and business risks. For example:

        • If the table schemas are consistent and a record in the destination database has the same primary key value as a record in the source database:

          • During full migration, DTS keeps the record in the destination database. The record from the source database is not migrated.

          • During incremental migration, DTS does not keep the record in the destination database. The record from the source database overwrites the record in the destination database.

        • If the table schemas are inconsistent, only some columns of data may be migrated, or the migration may fail. Proceed with caution.

      Index Name

      • If you select Table Name, the index name created in the target Elasticsearch instance matches the table name.

      • If you select Database Name_Table Name, the index name created in the target Elasticsearch instance is the database name, an underscore (_), and the table name, concatenated in that order.

      Note

      The index name mapping configuration applies to all tables.
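      The two naming policies can be sketched as a simple function; the database and table names below are placeholders:

      ```python
      # Compute the destination index name under the two policies above.
      def index_name(policy, database, table):
          if policy == "Table Name":
              return table
          if policy == "Database Name_Table Name":
              return f"{database}_{table}"
          raise ValueError(f"unknown policy: {policy}")

      print(index_name("Database Name_Table Name", "shop", "orders"))
      # → shop_orders
      ```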

      Case Policy for Destination Object Names

      You can configure the case sensitivity policy for the names of migrated objects, such as databases, tables, and columns, in the destination instance. By default, the DTS default policy is selected. You can also choose to keep the case sensitivity consistent with the default policy of the source or destination database. For more information, see Case sensitivity of object names in the destination database.

      Source Objects

      In the Source Objects box, click the objects to be migrated, then click the right arrow icon to move them to the Selected Objects box.

      Selected Objects

      To modify the field names after migration, in the Selected Objects area, right-click the corresponding table name, set the index name, Type name, and other information for the table in the destination Elasticsearch instance, then click OK. For more information, see Single Table Column Mapping.

      Note
      • The only special character supported for index names and Type names is the underscore (_).

      • You can set SQL filter conditions to filter data to be migrated. Only data that meets the filter conditions will be migrated to the destination instance. For more information, see Filter Task Data by SQL Condition.

    2. Click Next: Advanced Settings to configure advanced parameters.


      Dedicated Cluster for Task Scheduling

      By default, DTS schedules tasks on a shared cluster. You do not need to select one. If you want more stable tasks, you can purchase a dedicated cluster to run DTS migration tasks.

      Retry Time for Failed Connections

      After the migration task starts, if the connection to the source or destination database fails, DTS reports an error and immediately begins to retry the connection. The default retry duration is 720 minutes. You can customize the retry time to a value from 10 to 1440 minutes. We recommend that you set the duration to more than 30 minutes. If DTS reconnects to the source and destination databases within the specified duration, the migration task automatically resumes. Otherwise, the task fails.

      Note
      • For multiple DTS instances that share the same source or destination, the network retry time is determined by the setting of the last created task.

      • Because you are charged for the task during the connection retry period, we recommend that you customize the retry time based on your business needs, or release the DTS instance as soon as possible after the source and destination database instances are released.

      Retry Time for Other Issues

      After the migration task starts, if a non-connectivity issue, such as a DDL or DML execution exception, occurs in the source or destination database, DTS reports an error and immediately begins to retry the operation. The default retry duration is 10 minutes. You can customize the retry time to a value from 1 to 1440 minutes. We recommend that you set the duration to more than 10 minutes. If the related operations succeed within the specified retry duration, the migration task automatically resumes. Otherwise, the task fails.

      Important

      The value of Retry Time for Other Issues must be less than the value of Retry Time for Failed Connections.

      Shard Configuration

      Based on the maximum shard configuration of the index in the target Elasticsearch, set the number of primary shards and replica shards for the index.

      String Index

      The method for indexing strings in the target Elasticsearch instance.

      • analyzed: Analyze strings first, then index them. You also need to select a specific analyzer. For information about the type and function of analyzers, see Analyzers.

      • not analyzed: Do not analyze. Directly index the original value.

      • no: Do not index.

      Time Zone

      When DTS migrates time-type data, such as DATETIME and TIMESTAMP, to the target Elasticsearch instance, you can choose the time zone.

      Note

      If such time-type data in the destination instance does not need to include a time zone, you must set the document type for this time-type data in the destination instance in advance.

      DOCID

      The DOCID defaults to the table's primary key. If the table has no primary key, Elasticsearch automatically generates the document ID.
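      The default behavior can be sketched as follows. This is an illustration, not DTS's documented internal format: the separator used to join a composite primary key is an assumption.

      ```python
      # Derive a document _id from primary key values: join them when a
      # primary key exists, otherwise return None so Elasticsearch assigns
      # the _id automatically. The "_" separator is an assumption.
      def doc_id(primary_key_values):
          if not primary_key_values:
              return None          # let Elasticsearch auto-generate the _id
          return "_".join(str(v) for v in primary_key_values)

      print(doc_id([2024, "A17"]))
      # → 2024_A17
      ```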

      Environment Tag

      Select the environment label to identify the instance as needed. This example does not require a selection.

      Configure ETL

      Choose whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL?

      Whether to delete SQL operations on heartbeat tables of forward and reverse tasks

      Choose whether DTS writes heartbeat SQL statements to the source database while the instance is running.

      • Yes: DTS does not write heartbeat SQL statements to the source database. In this case, the DTS instance may display latency.

      • No: DTS writes heartbeat SQL statements to the source database. This may affect source database features such as physical backups and cloning.

      Monitoring and Alerting

      Select whether to set alerts and receive alert notifications based on your business needs.

      • No: Does not set an alert.

      • Yes: Configure alerts by setting an alert threshold and alert notification methods. If a migration fails or the latency exceeds the threshold, the system sends an alert notification.

    3. After you complete the configuration, click Next: Configure Table Fields at the bottom of the page to set the _routing policy and _id value for the tables to be migrated in the destination Elasticsearch.


      Is _routing Set?

      Setting _routing lets you route documents to specific shards in the destination Elasticsearch instance. For more information, see _routing.

      • If you select Yes, you can customize columns for routing.

      • If you select No, _id is used for routing.

      Note

      If the destination Elasticsearch instance is version 7.x, you must select No.

      _routing Column

      Select the column used for routing.

      Note

      This parameter is required only if Is _routing Set? is set to Yes.

      _id Value

      Select the column used as the document ID.

  6. Save the task and run a precheck.

    • To view the parameters for configuring this instance when you call the API operation, move the pointer over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters in the bubble that appears.

    • If you do not need to view or have finished viewing the API parameters, click Next: Save Task Settings and Precheck at the bottom of the page.

    Note
    • Before the migration task starts, DTS performs a precheck. The task starts only after it passes the precheck.

    • If the precheck fails, click View Details next to the failed check item, fix the issue based on the prompt, and then run the precheck again.

    • If a warning is reported during the precheck:

      • For check items that cannot be ignored, click View Details next to the failed item, fix the issue based on the prompt, and then run the precheck again.

      • For check items that can be ignored, click Confirm Alert Details, and then click Ignore, OK, and Precheck Again in sequence to skip the alert and run the precheck again. Ignoring a warning may cause issues such as data inconsistency and pose risks to your business.

  7. Purchase the instance.

    1. When the Success Rate reaches 100%, click Next: Purchase Instance.

    2. On the Purchase page, select the link specification for the data migration instance. For more information, see the following table.


      New Instance Class

      Resource Group Settings

      Select the resource group to which the instance belongs. The default value is default resource group. For more information, see What is Resource Management?

      Instance Class

      DTS provides migration specifications with different performance levels. The link specification affects the migration speed. You can select a specification based on your business scenario. For more information, see Data migration link specifications.

    3. After the configuration is complete, read and select Data Transmission Service (Pay-as-you-go) Service Terms.

    4. Click Buy and Start. In the OK dialog box that appears, click OK.

      You can view the progress of the migration task on the Data Migration Tasks list page.

      Note
      • If the migration task does not include incremental migration, it stops automatically after the full migration is complete. After the task stops, its Status changes to Completed.

      • If the migration task includes incremental migration, it does not stop automatically. The incremental migration task continues to run. While the incremental migration task is running, the Status of the task is Running.