
ApsaraDB for ClickHouse:Migrate data between ApsaraDB for ClickHouse Community-compatible edition clusters

Last Updated:Mar 27, 2025

When you plan to switch the version of an ApsaraDB for ClickHouse Community-compatible Edition cluster, you can use the instance migration feature in the ApsaraDB for ClickHouse console to migrate data. This feature supports full data migration and incremental data migration to ensure the integrity of your data.

Prerequisites

  • Both the source and destination clusters must meet the following requirements:

    • The clusters are ApsaraDB for ClickHouse Community-compatible Edition clusters.

      Note

      If you want to migrate data from a Community Edition cluster to an Enterprise Edition cluster, or from an Enterprise Edition cluster to a Community Edition cluster, see Migrate data from an ApsaraDB for ClickHouse Community Edition cluster to an Enterprise Edition cluster.

    • The clusters are in the Running state.

    • Database accounts and their passwords are created for both clusters.

    • Tiered storage of hot data and cold data is enabled on both the source and destination clusters, or disabled on both.

    • The clusters are deployed in the same region and the same virtual private cloud (VPC), and the IP address of each cluster is added to the whitelist of the other cluster. If the clusters cannot connect to each other, resolve the network issue first. For more information, see How to resolve network connectivity issues between the destination cluster and the data source.

      Note

      You can run the SELECT * FROM system.clusters; command to view the IP address of an ApsaraDB for ClickHouse instance. For information about how to configure a whitelist, see Configure a whitelist.

  • The destination cluster must meet the following additional requirements:

    • The version of the destination cluster is higher than or equal to the version of the source cluster. For information about the latest version, see Community-compatible Edition.

    • The unused disk storage space (excluding cold storage) of the destination cluster is greater than or equal to 1.2 times the used disk storage space (excluding cold storage) of the source cluster.

  • Each local table in the source cluster corresponds to a unique distributed table.
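
The last prerequisite can be verified before you create the task. The following Python sketch checks that every local table has exactly one distributed table; the table names are hypothetical, and the input pairs are assumed to have been collected beforehand, for example by listing the tables that use the Distributed engine in the system.tables system table:

```python
from collections import defaultdict

def check_distributed_mapping(local_tables, distributed_pairs):
    """Return local tables that do not have exactly one distributed table.

    local_tables:      names of the local (MergeTree) tables
    distributed_pairs: (distributed_table, local_table) tuples
    """
    dist_by_local = defaultdict(list)
    for dist, local in distributed_pairs:
        dist_by_local[local].append(dist)
    # A local table fails the precheck if it has 0 or more than 1 distributed tables.
    return {local: dist_by_local.get(local, [])
            for local in local_tables
            if len(dist_by_local.get(local, [])) != 1}

# Hypothetical example: orders_local has two distributed tables, logs_local has none.
issues = check_distributed_mapping(
    local_tables=["orders_local", "logs_local", "events_local"],
    distributed_pairs=[
        ("orders_all", "orders_local"),
        ("orders_dist", "orders_local"),
        ("events_all", "events_local"),
    ],
)
print(issues)  # {'orders_local': ['orders_all', 'orders_dist'], 'logs_local': []}
```

Any table reported here must be fixed before the migration precheck can pass: delete the redundant distributed tables or create the missing one.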

Considerations

  • Migration speed: In most cases, when you migrate data by using the console, the migration speed of a single node in the destination cluster is greater than 20 MB/s. If the data write speed of a single node in the source cluster is also greater than 20 MB/s, evaluate whether the migration speed of the destination cluster can keep up with the write speed of the source cluster. If it cannot, the migration may never complete.

  • During the migration, the destination cluster stops merging data parts, but the source cluster does not.

  • Migration content:

    • You can migrate the following objects from the source cluster: the cluster, databases, tables, data dictionaries, materialized views, user permissions, and cluster configurations.

    • You cannot migrate Kafka or RabbitMQ tables.

      Important

      To ensure that Kafka and RabbitMQ data is not sharded, delete the Kafka and RabbitMQ tables in the source cluster before you create the corresponding tables in the destination cluster, or use different consumer groups.

    • For non-MergeTree tables, such as external tables and Log tables, only table structures can be migrated.

      Note

      After data migration, non-MergeTree tables in the destination cluster have only table structures but no business data. You can use the remote function to migrate business data. For more information, see Migrate data by using the remote function.

  • Data volume for migration:

    • Cold data: The migration speed of cold data is relatively slow. We recommend that you clean up cold data in the source cluster so that the total volume does not exceed 1 TB. Otherwise, the migration may take too long and fail.

    • Hot data: If the volume of hot data exceeds 10 TB, the failure rate of migration tasks is relatively high. In this case, we recommend that you do not use this solution for migration.

  • If your data does not meet the preceding conditions, you can choose manual migration.
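
The guidelines above can be turned into a rough pre-flight check. This is only a sketch of the documented rules of thumb; the function name and the 20 MB/s default are assumptions based on the figures cited in this section:

```python
def migration_preflight(hot_tb, cold_tb, src_write_mbps, dest_migrate_mbps=20.0):
    """Check the documented guidelines for console-based migration.

    hot_tb, cold_tb:    hot and cold data volumes in TB
    src_write_mbps:     single-node write speed of the source cluster in MB/s
    dest_migrate_mbps:  assumed single-node migration speed of the destination
                        cluster (the documentation cites about 20 MB/s)
    """
    warnings = []
    if cold_tb > 1:
        warnings.append("cold data exceeds 1 TB")      # clean up cold data first
    if hot_tb > 10:
        warnings.append("hot data exceeds 10 TB")      # console migration not recommended
    if src_write_mbps >= dest_migrate_mbps:
        warnings.append("source write speed may outpace migration")
    return warnings

# A cluster with 12 TB of hot data and a 25 MB/s write rate trips two checks.
print(migration_preflight(hot_tb=12, cold_tb=0.5, src_write_mbps=25))
```

If any warning is returned, consider manual migration instead of the console feature.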

Impact on clusters

  • Source cluster: During migration, you can read data from and write data to tables in the source cluster, but you cannot perform DDL operations (operations to add, delete, or modify metadata of databases or tables).

    Important
    • To ensure that the migration task is completed as expected, the source cluster automatically suspends data writes within the preset data write suspension window when the estimated remaining migration time shown in the console is less than or equal to 10 minutes.

    • The source cluster automatically resumes data writes when the migration is complete within the preset data write suspension window, or when the window elapses before the migration is complete.

  • Destination cluster: After the migration is complete, the destination cluster continues to perform high-frequency merge operations for a period of time. This causes the I/O utilization to increase, which in turn increases the latency of business requests. We recommend that you plan in advance to address the potential impact of business request latency. You need to calculate the time required for merge operations. For information about how to calculate the time, see Calculate the time required for merge operations after migration.

Procedure

Important

The following operations are performed on the destination cluster, not on the source cluster.

Step 1: Create a migration task

  1. Log on to the ApsaraDB for ClickHouse console.

  2. On the Clusters page, select Community Edition Instance List and click the ID of the destination cluster.

  3. In the left-side navigation pane, choose Data Migration And Synchronization > ApsaraDB For ClickHouse.

  4. On the Instance Migration page, click Create Migration Task.

    1. Configure the source and destination clusters.

      Configure the following information and click Test Connection And Proceed.

      Note

      After the connection test succeeds, proceed to the Migration Content step. If the connection test fails, configure the source and destination clusters again as prompted.


    2. Confirm the migration content.

      Read the information about the content to be migrated and click Next: Precheck And Start Synchronization.

    3. The system performs prechecks on the migration configuration and then starts the migration task in the background.

      The system performs Instance Status Check, Storage Space Check, and Local Table And Distributed Table Check on the source and destination clusters.

      • If the prechecks pass, perform the following operations:

        1. Read the information about the impacts of data migration on clusters.

        2. Set Data Write Suspension Time.

          Note
          • The source cluster must stop writing data during the last 10 minutes of migration to ensure data consistency.

          • To ensure the success rate of data migration, we recommend that you specify a value greater than or equal to 30 minutes.

          • The migration task must be completed within five days after the task is started (when the task is created). Therefore, the end date of Data Write Suspension Time for the source cluster must be less than or equal to Current date + 5.

          • To reduce the impact of data migration on your business, we recommend that you configure a time range during off-peak hours.

        3. Click Completed.

          Note

          After you click Completed, the task is created and started.

      • If the prechecks fail, follow the on-screen instructions to resolve the issues and then configure the migration task parameters again. The following list describes the precheck items and requirements:

        • Instance Status Check: Before you migrate data, make sure that no management operations, such as scale-out, upgrade, or downgrade operations, are being performed on the source or destination cluster. If such operations are in progress on either cluster, the system cannot start a migration task.

        • Storage Space Check: Before a migration task is started, the system checks the storage space of the source and destination clusters. Make sure that the unused storage space of the destination cluster is greater than or equal to 1.2 times the used storage space of the source cluster.

        • Local Table And Distributed Table Check: If no distributed table is created for a local table of the source cluster, or multiple distributed tables are created for the same local table, the precheck fails. You must delete the redundant distributed tables or create a unique distributed table.
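
The rules for Data Write Suspension Time above can be sketched as a small validation helper. Treating the 5-day limit as task start time + 5 days, and checking the recommended 30-minute minimum as a hard rule, are assumptions for illustration; the console applies its own validation:

```python
from datetime import datetime, timedelta

def validate_suspension_window(task_start, window_start, window_end):
    """Check a proposed Data Write Suspension Time against the documented rules.

    The window should be at least 30 minutes long (recommended) and must end
    no later than 5 days after the task start (creation) time.
    """
    errors = []
    if window_end <= window_start:
        errors.append("window end must be after window start")
    elif window_end - window_start < timedelta(minutes=30):
        errors.append("window shorter than the recommended 30 minutes")
    if window_end > task_start + timedelta(days=5):
        errors.append("window ends later than task start + 5 days")
    return errors

# Hypothetical task created on March 1; a one-hour window the next night passes.
start = datetime(2025, 3, 1, 9, 0)
print(validate_suspension_window(start,
                                 datetime(2025, 3, 2, 2, 0),
                                 datetime(2025, 3, 2, 3, 0)))  # []
```

As the section notes, picking a window during off-peak hours reduces the business impact of the write suspension.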

Step 2: Evaluate whether the migration can be completed

If the write speed of the source cluster is less than 20 MB/s, you can skip this step.

If the write speed of the source cluster is greater than 20 MB/s, it may approach or exceed the migration speed of a single node in the destination cluster, which is theoretically greater than 20 MB/s. To ensure that the write speed of the destination cluster can keep up with that of the source cluster and the migration can be completed, check the actual write speed of the destination cluster to evaluate the feasibility of the migration. Perform the following steps:

  1. View Disk Throughput of the destination cluster to determine the actual write speed of the destination cluster. For information about how to view Disk Throughput, see View cluster monitoring information.

  2. Determine the relationship between the write speeds of the destination cluster and the source cluster.

    1. If the write speed of the destination cluster is greater than the write speed of the source cluster, the migration is more likely to succeed. Proceed to Step 3.

    2. If the write speed of the destination cluster is less than the write speed of the source cluster, the migration is more likely to fail. We recommend that you cancel the migration task and use manual migration to migrate data.
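
The comparison in this step can also be sketched numerically. Assuming writes continue on the source during migration, the backlog shrinks at the destination's write speed minus the source's write speed; the helper below is a rough single-node estimate for illustration, not a console feature:

```python
def migration_catches_up(backlog_gb, src_write_mbps, dest_write_mbps):
    """Estimate roughly how long the destination needs to catch up, in hours.

    backlog_gb:       data still to be migrated, in GB
    src_write_mbps:   ongoing write speed of the source cluster, in MB/s
    dest_write_mbps:  actual write speed of the destination cluster, in MB/s
    Returns None when the backlog never shrinks (migration cannot finish).
    """
    net_mbps = dest_write_mbps - src_write_mbps
    if net_mbps <= 0:
        return None  # destination cannot keep up; consider manual migration
    seconds = backlog_gb * 1024 / net_mbps
    return seconds / 3600

print(migration_catches_up(500, 25, 45))  # ~7.1 hours for a 500 GB backlog
print(migration_catches_up(500, 25, 20))  # None: migration would never complete
```

A None result corresponds to the case above where canceling the task and migrating manually is recommended.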

Step 3: View the migration task

  1. On the Clusters page, select Community Edition Instance List and click the ID of the destination cluster.

  2. In the left-side navigation pane, click Instance Migration.

    On the Instance Migration page, view Migration Status, Running Stage Information, and Data Write Suspension Window of the migration task.

    Note

    When the estimated remaining time for data migration in the Running Stage Information column is less than or equal to 10 minutes and the migration status is Migrating, the system triggers data write suspension for the source cluster to ensure data consistency. The following section describes the rules for data write suspension:

    • If the trigger time is within the preset time range for data write suspension of the source cluster, the source cluster suspends data writes.

    • If the trigger time is not within the preset time range for data write suspension of the source cluster and is less than or equal to Task start (creation) date + 5, you can modify the time window for data write suspension to continue the migration task.

    • If the trigger time is not within the preset time range for data write suspension of the source cluster and is greater than Task start (creation) date + 5, the migration fails. You must cancel the migration task, clear the migrated data in the destination cluster, and recreate a migration task to migrate data.
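
The suspension rules above can be summarized as a small decision helper. This is a sketch of the documented behavior for illustration, not an API:

```python
from datetime import datetime, timedelta

def suspension_outcome(trigger_time, window_start, window_end, task_start):
    """Map a data-write-suspension trigger time to the documented outcome."""
    if window_start <= trigger_time <= window_end:
        return "suspend writes"    # trigger falls inside the preset window
    if trigger_time <= task_start + timedelta(days=5):
        return "modify window"     # the suspension window can still be moved
    return "migration fails"       # cancel, clear migrated data, recreate the task

# Hypothetical task created March 1, window on the night of March 2.
ts = datetime(2025, 3, 1, 9, 0)
ws, we = datetime(2025, 3, 2, 2, 0), datetime(2025, 3, 2, 4, 0)
print(suspension_outcome(datetime(2025, 3, 2, 3, 0), ws, we, ts))  # suspend writes
```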

Step 4: (Optional) Cancel the migration task

  1. On the Clusters page, select Community Edition Instance List and click the ID of the destination cluster.

  2. In the left-side navigation pane, click Instance Migration.

  3. In the Actions column of the target migration task, click Cancel Migration.

  4. In the Cancel Migration dialog box, click OK.

    Note
    • After the migration task is canceled, the task state is not updated immediately. We recommend that you refresh the page at intervals to view the task state.

    • After the task is canceled, the Migration Status of the task changes to Completed.

    • Before you restart a migration task, you must clear the migrated data in the destination cluster to avoid data duplication.

Step 5: (Optional) Modify the time window for data write suspension

  1. On the Clusters page, select Community Edition Instance List and click the ID of the destination cluster.

  2. In the left-side navigation pane, click Instance Migration.

  3. In the Actions column of the target migration task, click Modify Data Write Suspension Window.

  4. In the Modify Data Write Suspension Window dialog box, select Data Write Suspension Time.

    Note

    The rules for setting Data Write Suspension Time are the same as those for setting Data Write Suspension Time when you create a migration task.

  5. Click OK.

References

For information about how to migrate data from a self-managed ClickHouse cluster to ApsaraDB for ClickHouse, see Migrate data from a self-managed ClickHouse cluster to ApsaraDB for ClickHouse Community-compatible Edition.