Full restoration restores all historical data from a PolarDB cluster to a new cluster. After you verify the data in the new cluster, you can migrate it back to the source cluster. You can perform a full restoration from a backup set or restore to a specific point in time. This topic describes how to restore all historical data from a backup set.
Note
The restored cluster contains the data and account information of the source cluster, but not its parameter settings.
Procedure
Log on to the PolarDB console. In the navigation pane on the left, click Clusters. Select the Region where the cluster is deployed and click the cluster ID.
In the navigation pane on the left, choose Backup and Restoration.
Restore data to a new cluster.
To restore data within the same region:
Find the target backup set and click Restore Data to New Cluster in the Actions column.
To restore data across regions:
On the Backup and Restoration page, select the region where the target backup set is located.
Find the target backup set and click Restore Data to New Cluster in the Actions column.
On the Clone Instance page, select a Product Type for the new cluster.
Subscription: You must pay for the computing resources of the cluster in advance.
Pay-as-you-go: You are charged for the computing resources that you use. You do not need to pay in advance.
Serverless: Fees include compute node fees, storage capacity fees, backup storage fees that are charged only if the free quota is exceeded, and optional SQL Explorer fees. For more information about billing, see Serverless billing.
Set the following parameters.
Parameter
Description
Clone Type
Select Restore Data from Backup Set.
Region
Select the destination region.
Note
If cross-region backup is enabled, you can restore data to the source region or the destination region.
If cross-region backup is disabled, the destination region is the same as the source region by default. You do not need to select a region.
Backup Set
Select the backup set that you want to restore.
Note
The Backup Start Time of each backup set is displayed. You can use this time to identify the target backup set.
Primary Zone
Select the primary zone for the cluster.
Note
In regions that have two or more zones, PolarDB automatically replicates data to a secondary zone for disaster recovery.
Minimum Number of Read-only Nodes for Scaling
Maximum Number of Read-only Nodes for Scaling
Minimum Read-only Nodes: Set the minimum number of read-only nodes that can be added. Valid values: 0 to 15.
Maximum Read-only Nodes: Set the maximum number of read-only nodes that can be added. Valid values: 0 to 15.
Note
The number of read-only nodes automatically increases or decreases within the specified range based on the workload. For more information about the scaling policy, see Auto scaling.
To ensure high availability for the serverless cluster, set Minimum Read-only Nodes to 1.
Lower Limit for Single-Node Scaling
Upper Limit for Single-Node Scaling
Minimum PCUs Per Node: Set the minimum number of PCUs per node in the cluster. Valid values: 0.25 PCU to 31 PCU.
Maximum PCUs Per Node: Set the maximum number of PCUs per node in the cluster. Valid values: 1 PCU to 32 PCU.
Note
Serverless uses PCUs for second-level billing and resource scaling. One PCU is equal to the service capability of about 1 core and 2 GB of memory. The PCUs of a node are dynamically adjusted within the specified range based on the workload. The minimum scaling unit is 0.5 PCU.
Example: If you set Minimum PCUs Per Node to 2 PCU and Maximum PCUs Per Node to 16 PCU, the default node specification in the serverless cluster is 2 PCU (about 2 cores and 4 GB of memory). When the system detects an increased workload, it automatically increases the number of PCUs of the primary or read-only nodes, up to a maximum of 16 PCU (about 16 cores and 32 GB of memory).
Note
These parameters are available only when you set Product Type to Serverless.
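The serverless scaling rule described above can be sketched as a small helper. This is illustrative only: `load_ratio` is a hypothetical measure of demand, not an actual PolarDB metric, and the function merely models the stated constraints (0.5 PCU steps, clamped to the configured range).

```python
import math

def next_pcu(current_pcu: float, load_ratio: float,
             min_pcu: float = 2.0, max_pcu: float = 16.0) -> float:
    """Sketch of the scaling behavior described above: PCUs move
    within [min_pcu, max_pcu] in steps of 0.5 PCU.
    `load_ratio` is a hypothetical demand multiplier, not a real metric."""
    step = 0.5                                 # minimum scaling unit from the Note above
    target = current_pcu * load_ratio
    target = math.ceil(target / step) * step   # snap up to a 0.5 PCU boundary
    return min(max(target, min_pcu), max_pcu)  # clamp to the configured range

# With the 2 PCU / 16 PCU example from the text:
print(next_pcu(2.0, 3.0))   # load triples -> 6.0 PCU (about 6 cores, 12 GB)
print(next_pcu(2.0, 10.0))  # demand exceeds the cap -> clamped to 16.0 PCU
```

The clamp at the end is what guarantees the example's behavior: no matter how high the detected load, a node never exceeds the configured Maximum PCUs Per Node.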
Network Type
The value is fixed as VPC. You do not need to select a value.
VPC
vSwitch
Select the VPC and vSwitch for the cluster. We recommend that you use the same VPC and vSwitch as the source cluster.
Note
Make sure that the PolarDB cluster and the ECS instance to which you want to connect are in the same VPC. Otherwise, they cannot communicate over the internal network and cannot achieve optimal performance.
Compatibility
This parameter inherits the setting from the source cluster and cannot be changed.
For example, if the source cluster is compatible with MySQL 8.0, this parameter is also set to MySQL 8.0.
Minor Version
Select 8.0.1 or 8.0.2.
Note
This parameter is available only when you set Compatibility to MySQL 8.0.
Edition
This parameter inherits the setting from the source cluster and cannot be changed.
For example, if the source cluster is of Cluster Edition, this parameter is also set to Cluster Edition. For more information, see Enterprise Edition series.
Database Type
PolarDB for MySQL Cluster Edition supports two specification types: General-purpose and Dedicated.
Dedicated: Each cluster exclusively uses its allocated computing resources, such as CPUs. Resources are not shared with other clusters on the same server. This provides higher performance stability and reliability.
General-purpose: Idle computing resources, such as CPUs, are shared among clusters on the same server. This resource-sharing model is more cost-effective.
For a detailed comparison of the two types, see Comparison between General-purpose and Dedicated specifications.
CPU Architecture
This parameter inherits the setting from the source cluster and cannot be changed.
Node Specification
Select a node specification. Different specifications provide different maximum storage capacities and performance levels. For more information, see Compute node specifications for Enterprise Edition.
Note
To ensure that the restored cluster runs as expected, select a node specification that is the same as or higher than that of the source cluster.
Number of Nodes
If your source cluster is of Cluster Edition, the default is two nodes: one read/write node and one read-only node. You can also select a single node (a read/write node).
If your source cluster is of Multi-master Cluster (Limitless) Edition, the system creates two primary nodes with the same specifications by default. You do not need to configure this parameter.
Database Proxy Type
PolarDB supports two database proxy types: Standard Enterprise Edition and Dedicated Enterprise Edition.
Standard Enterprise Edition: This type is used with the General-purpose sub-series. It shares physical CPU resources and provides intelligent, second-level resource scaling based on workloads.
Dedicated Enterprise Edition: This type is used with the Dedicated sub-series. It exclusively uses physical CPU resources and provides better performance stability.
Note
The PolarProxy Enterprise Edition is currently free of charge. The billing start time is yet to be determined.
Enable Hot Standby Cluster
PolarDB provides multiple high availability modes. After you enable the hot standby storage cluster feature for a PolarDB cluster, a hot standby storage cluster is created in the secondary zone of the region in which the PolarDB cluster resides or in a different data center in the same zone. The hot standby storage cluster has independent storage resources. Whether the hot standby storage cluster has independent compute resources varies based on the high availability mode. When the PolarDB cluster in the primary zone fails, the hot standby storage cluster immediately takes over and handles read and write operations and storage tasks.
Note
For more information about the hot standby storage cluster and related solutions, see High availability modes (hot standby clusters).
Rules for changing high availability modes:
You cannot directly change the high availability mode of a cluster from Double Zones (Hot Standby Storage Cluster Enabled) or Double Zones (Hot Standby Storage and Compute Clusters Enabled) to Single Zone (Hot Standby Storage Cluster Disabled).
For such change of the high availability mode, we recommend that you purchase a new cluster and select the Single Zone (Hot Standby Storage Cluster Disabled) high availability mode for the cluster. Then, migrate the existing cluster to the new cluster by using Data Transmission Service (DTS). For information about how to migrate an existing cluster to a new cluster, see Migrate data between PolarDB for MySQL clusters.
You can select the Three Zones high availability mode only when you purchase a new cluster. You cannot change the high availability mode of a cluster from Three Zones to other high availability modes and vice versa.
You can manually change the high availability mode of a cluster from Single Zone (Hot Standby Storage Cluster Disabled) to a different high availability mode. For more information, see High availability modes (hot standby clusters).
Three-AZ Strong Consistency Deployment
Specifies whether to enable three-AZ strong consistency deployment.
Storage Class
The storage class of the new cluster is the same as that of the source cluster. If the source cluster uses ESSD, you can select only ESSD. If the source cluster uses PSL4 or PSL5, you can select PSL4 or PSL5.
ESSDs are ultra-high performance disks developed by Alibaba Cloud. ESSDs use a next-generation distributed block storage architecture and support 25 Gigabit Ethernet networks and Remote Direct Memory Access (RDMA). Each ESSD has low one-way latency and can deliver up to 1 million random read/write IOPS. ESSDs are divided into the following categories:
PL0 ESSD: Basic performance level.
PL1 ESSD: Delivers 5× higher IOPS and ~2× higher throughput than PL0.
PL2 ESSD: Delivers ~2× higher IOPS and throughput than PL1.
PL3 ESSD: Delivers up to 10× higher IOPS and 5× higher throughput than PL2, ideal for scenarios requiring extreme concurrent I/O performance and stable low read/write latency.
ESSD AutoPL disk: Decouples IOPS from capacity, allowing flexible configuration and on-demand adjustments to reduce total cost of ownership (TCO).
Important
For ESSD performance details, see ESSD.
When a disk's storage space is full, the disk is locked and becomes read-only. To avoid service disruption, you can enable automatic ESSD storage expansion.
PSL4 and PSL5 are storage classes designed by PolarDB for different scenarios. The differences are as follows:

| Storage class | Features | Scenarios |
| --- | --- | --- |
| PSL5 (PolarStore Level 5) | The storage class supported in earlier versions of PolarDB and the default storage class for PolarDB clusters purchased before June 7, 2022. It provides better performance, reliability, and availability. | Business scenarios that require high performance and reliability, where the database is a core system, such as finance, e-commerce, government services, and medium-to-large Internet businesses. |
| PSL4 (PolarStore Level 4) | A new storage class that uses Alibaba Cloud's proprietary smart SSD technology to compress and decompress data at the physical SSD layer. This lowers the storage price per unit of data while keeping the performance impact under control. | Application scenarios that require cost reduction and high cost-effectiveness. |
Note
Storage class conversion rules:
Some product series support storage class upgrades, which means that PSL4 storage can be upgraded to PSL5 storage.
Downgrading the storage class is not supported. You cannot downgrade PSL5 storage to PSL4 storage.
To switch from PSL5 storage to PSL4 storage, you can purchase a new cluster and migrate the data from the original cluster to the new cluster using a migration tool such as DTS or the major version upgrade feature.
Storage Engine
PolarDB supports two storage engine types: InnoDB and InnoDB & X-Engine.
InnoDB: the InnoDB engine.
InnoDB & X-Engine: a hybrid deployment of InnoDB and X-Engine. If you select this option, you can set the proportion of the high-compression engine. For more information about the high-compression engine, see High-compression engine (X-Engine).
Note
This parameter is not supported by PolarDB for MySQL Standard Edition.
Storage Billing Method
PolarDB supports two storage billing methods: Pay-as-you-go and Subscription.
Pay-as-you-go: This method uses a serverless architecture. You do not need to specify a storage capacity at the time of purchase. The storage capacity automatically scales as your data grows, and you are charged only for the actual storage space that you use. For more information, see Pricing for pay-by-capacity (pay-as-you-go).
Subscription: You must pay for the storage space of the cluster in advance. For more information, see Pricing for pay-by-space (subscription).
Note
If you set Billing Method to Subscription, you can set Storage Billing Method to Pay-as-you-go or Subscription. If you set Billing Method to Pay-as-you-go, this parameter is not supported, and storage is billed on a pay-as-you-go basis by default.
Storage Capacity
The storage capacity that you want to purchase for a Subscription cluster. The storage capacity ranges from 50 GB to 500 TB. The minimum increment is 10 GB.
Note
This parameter is available only when you set Storage Billing Method to Subscription.
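The capacity rule above (50 GB to 500 TB, in 10 GB increments) can be expressed as a small validation helper. This is a hypothetical client-side check, not part of any PolarDB SDK, and it assumes 500 TB means 500 × 1024 GB; verify the exact upper bound against the console.

```python
# Hypothetical helper: validates the subscription storage-capacity rule
# stated above (50 GB to 500 TB, minimum increment 10 GB).
MIN_GB = 50
MAX_GB = 500 * 1024  # assumption: 1 TB = 1024 GB

def is_valid_storage_capacity(gb: int) -> bool:
    return MIN_GB <= gb <= MAX_GB and gb % 10 == 0

print(is_valid_storage_capacity(50))  # True: lower bound, multiple of 10
print(is_valid_storage_capacity(55))  # False: not a 10 GB increment
print(is_valid_storage_capacity(40))  # False: below the 50 GB minimum
```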
Storage Fee
You do not need to specify a storage capacity when you purchase the cluster. PolarDB charges you for the actual storage usage on an hourly basis.
Enable Binlog
Specifies whether to enable binary logging. For more information about binary logging, see Enable binary logging.
Cluster Name
Enter a name for the cluster. The name must meet the following requirements:
It cannot start with http:// or https://.
It must be 2 to 256 characters in length.
If you leave this parameter empty, the system automatically generates a cluster name. You can change the cluster name after the cluster is created.
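The two naming rules above can be checked locally before submitting the form. The function below is a hypothetical helper, not a PolarDB API; it enforces only the rules listed in this table.

```python
import re

def is_valid_cluster_name(name: str) -> bool:
    """Hypothetical client-side check for the naming rules above."""
    if not 2 <= len(name) <= 256:
        return False                    # must be 2 to 256 characters
    if re.match(r"https?://", name):
        return False                    # must not start with http:// or https://
    return True

print(is_valid_cluster_name("restored-cluster-01"))  # True
print(is_valid_cluster_name("http://my-cluster"))    # False
print(is_valid_cluster_name("a"))                    # False: too short
```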
Inherit Tags from Source Cluster
Specifies whether to inherit the tags from the source cluster.
Duration
Select a subscription duration for the cluster.
Note
This parameter is available only when you set Billing Method to Subscription.
Quantity
Select the number of clusters that you want to purchase.
After you configure the parameters, confirm the cluster configuration and fees, and then read the service agreement. If the configuration is correct, click Buy Now.
After you complete the purchase, it takes 10 to 15 minutes to create the cluster. You can then view the new cluster in the cluster list on the PolarDB console.
Note
If the status of a node in the cluster is Creating, the cluster is still being created and is unavailable. The cluster is ready for use only when its status changes to Running.
Make sure that you select the correct region in the upper-left corner of the console. Otherwise, you cannot view the new cluster.
Related APIs

| API | Description |
| --- | --- |
| CreateDBCluster | Creates a PolarDB cluster by restoring data from a backup set. |
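A backup-set restore can also be scripted. The sketch below only assembles the request parameters for a CreateDBCluster call; it makes no network request. The parameter names (CreationOption, SourceResourceId, CloneDataPoint, and so on) are assumptions based on the PolarDB OpenAPI and should be verified against the CreateDBCluster API reference before use.

```python
def build_restore_request(region_id: str, source_cluster_id: str,
                          backup_set_id: str, node_class: str) -> dict:
    """Assemble assumed parameters for a CreateDBCluster call that
    restores a new cluster from a backup set. Names are not verified
    here; check them against the official API reference."""
    return {
        "Action": "CreateDBCluster",
        "RegionId": region_id,
        "DBType": "MySQL",                     # inherited compatibility, e.g. MySQL
        "DBVersion": "8.0",
        "DBNodeClass": node_class,             # same as or higher than the source
        "PayType": "Postpaid",                 # pay-as-you-go
        "CreationOption": "CloneFromPolarDB",  # assumed value for restore/clone
        "SourceResourceId": source_cluster_id, # ID of the source cluster
        "CloneDataPoint": backup_set_id,       # assumed: ID of the target backup set
    }

params = build_restore_request("cn-hangzhou", "pc-bp1xxxxxxxx",
                               "123456789", "polar.mysql.x4.medium")
print(params["CreationOption"])  # CloneFromPolarDB
```

In practice you would pass these parameters to the Alibaba Cloud SDK or CLI client of your choice and sign the request there.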