
PolarDB:Create a serverless cluster

Last Updated:Nov 12, 2024

When you create a serverless cluster, you do not need to specify the specifications for each compute node. You only need to specify the maximum and minimum numbers of read-only nodes and the maximum and minimum numbers of PolarDB capacity units (PCUs) for a single node. PolarDB automatically scales the primary or read-only nodes of the serverless cluster within the specified range. This topic describes how to create a serverless cluster.

Note
  • If you use an existing PolarDB for MySQL cluster, you can enable the serverless feature for the cluster. For more information, see Enable the serverless feature for a cluster with defined specifications.

  • If you use other database services and want to use the serverless feature, you can use Data Transmission Service (DTS) to migrate data from your existing database services to a new serverless cluster. For more information, see Overview.

  • You cannot enable the In-Memory Column Index (IMCI) feature for an existing serverless cluster. You can enable the serverless feature on an existing read-only column store node or when you create a read-only column store node. For more information, see Enable the serverless feature on a read-only column store node.

Prerequisites

An Alibaba Cloud account is created and used to log on to the PolarDB console. For more information, see Register and log on to an Alibaba Cloud account.

Step 1: Complete the basic settings

In this step, you can configure the basic parameters and resources that are required to purchase a cluster. The basic parameters include Billing Method, Region, and Zone. The basic resources include the database engine, scaling range of resources, and storage. After you complete the settings in this step, click Next.

  1. Go to the PolarDB cluster purchase page.

  2. Set Billing Method to Serverless.

  3. Configure the Region parameter.

    Select a region close to your users to reduce network latency. After a cluster is created, you cannot change its region. For more information, see Regions and zones.

    Note
    • Make sure that the PolarDB cluster and the Elastic Compute Service (ECS) instance to which you want to connect are deployed in the same region. Otherwise, the PolarDB cluster and the ECS instance can communicate only over the Internet, which degrades cluster performance.

    • You can deploy the PolarDB cluster and the ECS instance in the same zone or in different zones. If you create a PolarDB cluster in the same zone as the ECS instance, the network latency is reduced and the access speed is increased.

  4. Set Creation Method to Create Primary Cluster.

  5. In the Database Engine drop-down list, select MySQL 8.0.2, MySQL 8.0.1, or MySQL 5.7.

  6. Set Product Edition to Enterprise Edition or Standard Edition.

  7. Set Primary Zone.

    • A zone is an independent geographical location in a region. All zones in a region provide the same level of service performance.

    • You can deploy the PolarDB cluster and the ECS instance in the same zone or in different zones.

    • You need to specify only the primary zone. The system automatically selects a secondary zone.

  8. Set Enable Hot Standby Cluster.

    PolarDB provides multiple high availability modes. After you enable the hot standby storage cluster feature for a PolarDB cluster, a hot standby storage cluster is created in the secondary zone of the region in which the PolarDB cluster resides or in a different data center in the same zone. The hot standby storage cluster has independent storage resources. Whether the hot standby storage cluster has independent compute resources varies based on the high availability mode. When the PolarDB cluster in the primary zone fails, the hot standby storage cluster immediately takes over and handles read and write operations and storage tasks.

    Note
    • For more information about the hot standby storage cluster and related solutions, see High availability modes (hot standby clusters).

    • Rules for changing high availability modes:

      • You cannot directly change the high availability mode of a cluster from Double Zones (Hot Standby Storage Cluster Enabled) or Double Zones (Hot Standby Storage and Compute Clusters Enabled) to Single Zone (Hot Standby Storage Cluster Disabled).

        For such change of the high availability mode, we recommend that you purchase a new cluster and select the Single Zone (Hot Standby Storage Cluster Disabled) high availability mode for the cluster. Then, migrate the existing cluster to the new cluster by using Data Transmission Service (DTS). For information about how to migrate an existing cluster to a new cluster, see Migrate data between PolarDB for MySQL clusters.

      • You can select the Three Zones high availability mode only when you purchase a new cluster. You cannot change the high availability mode of a cluster from Three Zones to other high availability modes and vice versa.

    • You can manually change the high availability mode of a cluster from Single Zone (Hot Standby Storage Cluster Disabled) to a different high availability mode. For more information, see High availability modes (hot standby clusters).

  9. Set the scaling limits for resources in the serverless cluster.

    • Minimum Read-only Nodes: the minimum number of read-only nodes that can be added. Valid values: 0 to 15.

      Note

      To ensure high availability of the serverless cluster, we recommend that you set Minimum Read-only Nodes to at least 1.

    • Maximum Read-only Nodes: the maximum number of read-only nodes that can be added. The number of read-only nodes automatically scales up or down depending on your workloads. Valid values: 0 to 15.

    • Minimum PCUs per Node: the minimum number of PCUs per node in the cluster. PolarDB serverless clusters perform per-second billing and scaling based on PCU usage. A PCU is approximately equal to 1 CPU core and 2 GB of memory. The number of PCUs dynamically increases or decreases within the specified range based on your workloads. Valid values: 1 PCU to 31 PCUs.

    • Maximum PCUs per Node: the maximum number of PCUs per node in the cluster. Valid values: 1 PCU to 32 PCUs.
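The scaling limits above can be sanity-checked before you submit the order. The following Python sketch is illustrative only (the function name and return value are not part of any PolarDB SDK); it validates a serverless scaling configuration against the documented ranges and estimates the peak capacity of a single node:

```python
def validate_serverless_scaling(ro_min, ro_max, pcu_min, pcu_max):
    """Check a serverless scaling configuration against the documented limits."""
    if not (0 <= ro_min <= ro_max <= 15):
        raise ValueError("read-only node range must satisfy 0 <= min <= max <= 15")
    if not (1 <= pcu_min <= 31):
        raise ValueError("Minimum PCUs per Node must be between 1 and 31")
    if not (pcu_min <= pcu_max <= 32):
        raise ValueError("Maximum PCUs per Node must be between the minimum and 32")
    # A PCU is approximately 1 CPU core and 2 GB of memory, so the
    # peak capacity of a single node is roughly:
    return {"max_cores": pcu_max * 1, "max_memory_gb": pcu_max * 2}

print(validate_serverless_scaling(ro_min=1, ro_max=4, pcu_min=1, pcu_max=8))
# {'max_cores': 8, 'max_memory_gb': 16}
```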

  10. Specify whether to enable the no-activity suspension feature. This feature is disabled by default.

  11. Set Storage Type.

    PolarDB for MySQL Enterprise Edition supports the PSL5 and PSL4 storage types:

    • PSL5: the default storage type of PolarDB clusters purchased before June 7, 2022. PSL5 provides higher performance, reliability, and availability.

    • PSL4: a new storage type for PolarDB. PSL4 uses the Smart-SSD technology developed in-house by Alibaba Cloud to compress and decompress data that is stored on SSDs. PSL4 can minimize the storage costs of data while maintaining a high disk performance.

      Note

      You cannot change the storage type of an existing cluster. To use a different storage type, we recommend that you purchase a new cluster with the desired storage type and then migrate data from the existing cluster to the new cluster.

    For more information, see How do I select between PSL4 and PSL5?.

    In addition to PSL5 and PSL4, PolarDB for MySQL Standard Edition supports PL0 to PL3 ESSDs and ESSD AutoPL disks.

    ESSDs are ultra-high performance disks developed by Alibaba Cloud. ESSDs use a next-generation distributed block storage architecture and support 25 Gigabit Ethernet networks and Remote Direct Memory Access (RDMA). Each ESSD has low one-way latency and can deliver up to 1 million random read/write IOPS. ESSDs are provided at the following performance levels (PLs):

    • PL0 ESSD: A PL0 ESSD delivers the basic performance of an ESSD.

    • PL1 ESSD: A PL1 ESSD delivers IOPS that is five times that delivered by a PL0 ESSD and throughput that is approximately twice that delivered by the PL0 ESSD.

    • PL2 ESSD: A PL2 ESSD delivers IOPS and throughput that are approximately twice the IOPS and throughput delivered by a PL1 ESSD.

    • PL3 ESSD: A PL3 ESSD delivers IOPS that is up to ten times that delivered by a PL2 ESSD and throughput that is up to five times that delivered by the PL2 ESSD. PL3 ESSDs are suitable for business scenarios in which highly concurrent requests must be processed with high I/O performance and low read and write latencies.

    • ESSD AutoPL disk: Compared with an ESSD at one of the preceding PLs, an ESSD AutoPL disk decouples IOPS from storage capacity, gives you the flexibility to configure each independently, and allows you to adjust them as needed. This reduces the total cost of ownership (TCO).

    For more information about the performance of ESSDs, see ESSDs.
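The relative performance statements above can be chained to compare any level against PL0. The following Python sketch treats the "approximately" and "up to" multipliers as point values for illustration only:

```python
# Relative IOPS and throughput multipliers between adjacent ESSD
# performance levels, taken from the descriptions above. "Up to" and
# "approximately" figures are used as exact values for illustration only.
STEP_UP = {
    "PL1": {"iops": 5, "throughput": 2},   # vs. PL0
    "PL2": {"iops": 2, "throughput": 2},   # vs. PL1
    "PL3": {"iops": 10, "throughput": 5},  # vs. PL2
}

def relative_to_pl0(level):
    """Cumulative (IOPS, throughput) multiplier of a level relative to PL0."""
    if level == "PL0":
        return (1, 1)
    iops = throughput = 1
    for pl in ("PL1", "PL2", "PL3"):
        iops *= STEP_UP[pl]["iops"]
        throughput *= STEP_UP[pl]["throughput"]
        if pl == level:
            break
    return (iops, throughput)

print(relative_to_pl0("PL3"))  # (100, 20): up to 100x IOPS, 20x throughput vs. PL0
```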

    Important

    After the storage capacity of an ESSD is exhausted, the disk is locked and handles only read operations.

    If you select ESSD AutoPL, you can configure the Provisioned IOPS for AutoPL parameter to add up to 50,000 provisioned IOPS on top of the initial maximum of 50,000 input/output operations per second (IOPS). Therefore, the maximum IOPS of an ESSD AutoPL disk can theoretically reach 100,000.
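As a quick arithmetic check, the theoretical maximum IOPS is the baseline maximum plus the provisioned IOPS. The function below is illustrative, not part of any PolarDB SDK:

```python
BASELINE_MAX_IOPS = 50_000      # initial maximum IOPS of an ESSD AutoPL disk
PROVISIONED_IOPS_CAP = 50_000   # maximum value of Provisioned IOPS for AutoPL

def max_autopl_iops(provisioned_iops):
    """Theoretical maximum IOPS of an ESSD AutoPL disk."""
    if not 0 <= provisioned_iops <= PROVISIONED_IOPS_CAP:
        raise ValueError("provisioned IOPS must be between 0 and 50,000")
    return BASELINE_MAX_IOPS + provisioned_iops

print(max_autopl_iops(50_000))  # 100000
```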

Step 2: Complete the cluster configurations

In this step, set the cluster name, network type, parameter template, and table name case sensitivity. After you complete the cluster configurations, click Next: Confirm Order.

  1. Specify Cluster Name: use the auto-generated name or enter a custom name.

    The auto-generated name can be modified after the cluster is created. A custom cluster name must meet the following requirements:

    • It cannot start with http:// or https://.

    • It must be 2 to 256 characters in length.

    • It must start with a letter and can contain letters, digits, periods (.), underscores (_), and hyphens (-).
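The naming rules can be expressed as a quick client-side check. The following Python sketch is based only on the rules as stated above; the console may accept additional characters beyond these:

```python
import re

# Pattern derived from the naming rules above: starts with a letter,
# followed by 1 to 255 letters, digits, periods, underscores, or hyphens.
CLUSTER_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9._-]{1,255}$")

def is_valid_cluster_name(name):
    """Return True if the name satisfies the documented naming rules."""
    if name.lower().startswith(("http://", "https://")):
        return False
    return CLUSTER_NAME_RE.fullmatch(name) is not None

print(is_valid_cluster_name("prod-polardb_01"))  # True
print(is_valid_cluster_name("x"))                # False: shorter than 2 characters
```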

  2. Set Resource Group.

    Select a resource group from the drop-down list. For more information, see Create a resource group.

    Note

    A resource group is a group of relevant resources that belong to an Alibaba Cloud account. Resource groups allow you to manage resources in a centralized manner. A resource belongs to only one resource group. For more information, see Classify resources into resource groups and grant permissions on the resource groups.

  3. Configure a virtual private cloud (VPC) and a vSwitch.

    The network type is fixed to VPC. You do not need to configure this parameter. Make sure that the PolarDB cluster is created in the same VPC as the ECS instance to which you want to connect. Otherwise, the cluster and the ECS instance cannot communicate over the internal network, which degrades performance.

    • If you have an existing VPC that meets your network requirements, select the VPC. For example, if you have an existing ECS instance and the VPC to which the ECS instance belongs meets your network requirements, select this VPC.

    • Otherwise, use the default VPC and the default vSwitch.

      • Default VPC:

        • Only one VPC is specified as the default VPC in the region that you select.

        • The CIDR block of the default VPC uses a 16-bit subnet mask. For example, the CIDR block of the default VPC can be 192.168.0.0/16. This CIDR block provides up to 65,536 private IP addresses.

        • The default VPC does not count towards the quota of VPCs that you can create on Alibaba Cloud.

      • Default vSwitch:

        • Only one vSwitch is specified as the default vSwitch in the zone that you select.

        • The CIDR block of the default vSwitch uses a 20-bit subnet mask. For example, the CIDR block of the default vSwitch can be 192.168.0.0/20. This CIDR block provides up to 4,096 private IP addresses.

        • The default vSwitch does not count towards the quota of vSwitches that you can create in a VPC.

    • If the default VPC and vSwitch cannot meet your business requirements, you can create your own VPC and vSwitch. For more information, see Create and manage a VPC.
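The address capacities quoted above follow directly from the subnet mask lengths. The following Python sketch verifies them with the standard ipaddress module:

```python
import ipaddress

# Address capacity of the default VPC (/16) and default vSwitch (/20)
# CIDR blocks mentioned above.
for cidr in ("192.168.0.0/16", "192.168.0.0/20"):
    net = ipaddress.ip_network(cidr)
    print(cidr, "->", net.num_addresses, "addresses")
# 192.168.0.0/16 -> 65536 addresses
# 192.168.0.0/20 -> 4096 addresses
```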

  4. Set Time Zone.

    The time zone of the cluster. The default value is UTC+08:00.

  5. Set the case sensitivity of Table Name.

    You can specify whether table names in the cluster are case-sensitive. The default value of this parameter is Case-insensitive (Default). If table names in your on-premises database are case-sensitive, select Case-sensitive to facilitate data migration.

    Note

    After the cluster is created, you cannot change the value of this parameter. We recommend that you configure this parameter based on your business requirements.

Step 3: Confirm the order

Before the cluster is created, make sure that the selected configurations, such as the quantity, meet your requirements.

  1. Check the selected settings.

    To modify the settings in a step, click the edit icon.

  2. Set Quantity.

    You can create a maximum of 50 clusters at a time. This allows you to create multiple clusters in specific scenarios. For example, you can deploy multiple game servers at a time.

  3. Read and select the Terms of Service.

  4. View the fee and details in the lower part of the page. If they are correct, click Buy Now.

    After you complete the payment, wait 10 to 15 minutes. Then, you can view the newly created cluster in the PolarDB console.

    Note
    • If specific nodes in the cluster are in the Creating state, the cluster is being created and is unavailable. The cluster is available only when it is in the Running state.

    • Make sure that you select the region in which the cluster is deployed when you view the cluster. Otherwise, the cluster is not displayed.