By Alibaba Cloud ApsaraDB Team
Whenever there's a large event, such as the Double 11 Shopping Festival or during the Spring Festival holiday season, large amounts of computing resources are required to support spikes in user traffic. To ensure smooth and stable operations of all services on Alibaba Cloud, Elastic Compute Service (ECS) servers and ApsaraDB for RDS databases need to cope with these peaks and fluctuations. Achieving this on a traditional cloud architecture is challenging, which is why Alibaba Cloud created PolarDB to provide minute-level elastic scaling for such scenarios.
Perhaps the greatest feature of Alibaba Cloud PolarDB is the separation of storage and compute resources. Specifically, the compute node (DB Engine) and the storage node (DB Store) run on different physical servers, so all I/O operations to the storage device are network I/O operations. Some may ask about network latency and performance. When comparing the latency of writing three data block replicas to PolarStore over the network through PolarFS against that of writing one data block replica to a local SSD, the results are very close.
PolarDB's storage-compute separation architecture reduces storage costs, ensures high data consistency between the primary and backup copies, and prevents data loss. It also has a major additional advantage: it makes "elastic scaling" of the database extremely simple and convenient.
Elastic scaling is a major feature of the cloud and a key reason many people migrate their IT systems to it. However, elastic scaling of the database has always been an industry pain point. Unlike ECS instances, which purely provide computing services, a database carries state, so scaling it traditionally means moving data along with the compute.
Now that storage-compute separation has removed this bottleneck, we can finally make new progress in database elastic scaling by combining it with an architecture in which multiple nodes share the same data.
As shown above, PolarDB has a layered architecture. In the top layer, PolarProxy provides read/write splitting and SQL acceleration. In the middle layer, the PolarDB database engine nodes form a multiple-read, single-write (MRSW) database cluster. In the bottom layer, the distributed storage PolarStore provides multi-node shared storage to the layers above. Each of these three layers has a different role; together, they make up a PolarDB cloud database cluster.
From the product definition of PolarDB, we can see that the node count and specification (such as 4-core, 16 GB) that a user pays for refer to the configuration of the PolarDB engine nodes in the middle layer. PolarProxy in the top layer adapts automatically to the PolarDB configuration; users do not pay for it and need not be concerned about its performance or capacity. The storage of PolarStore in the bottom layer is resized automatically, and users only pay for the actual volume used.
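To make this pricing model concrete, here is a minimal sketch of how a monthly bill could be estimated: compute is paid per node by specification, storage is billed hourly on actual usage, and PolarProxy is free. All prices and the `estimate_monthly_cost` helper are hypothetical illustrations, not actual Alibaba Cloud rates.

```python
# Hypothetical cost model: users pay per database node (by specification)
# plus hourly storage billing for the volume actually used.
# All prices below are illustrative placeholders, not real Alibaba Cloud rates.

HOURS_PER_MONTH = 730

# Illustrative hourly price per node specification.
NODE_PRICE_PER_HOUR = {
    "4c16g": 0.50,   # 4-core, 16 GB
    "8c32g": 1.00,   # 8-core, 32 GB
}

STORAGE_PRICE_PER_GB_HOUR = 0.0002  # illustrative

def estimate_monthly_cost(spec: str, node_count: int, used_gb: float) -> float:
    """Estimate a monthly bill: compute is charged per node by spec;
    storage is charged hourly on actual usage; PolarProxy is free."""
    compute = NODE_PRICE_PER_HOUR[spec] * node_count * HOURS_PER_MONTH
    storage = used_gb * STORAGE_PRICE_PER_GB_HOUR * HOURS_PER_MONTH
    return compute + storage

# e.g. a 2-node 4c16g cluster using 500 GB of storage
print(f"${estimate_monthly_cost('4c16g', 2, 500):.2f}")
```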
Generally speaking, there are two types of database scalability: vertical scaling (also known as scaling up) and horizontal scaling (also known as scaling out). Vertical scaling upgrades the configuration, while horizontal scaling adds nodes of the same configuration. For a database, vertical scaling is usually tried first, for example upgrading from 4 cores to 8 cores. However, it hits bottlenecks. On the one hand, the performance improvement is nonlinear, depending on the database engine's own design and the application's access pattern (for example, MySQL's multi-threaded design cannot exploit multiple cores if there is only one session). On the other hand, the configuration of a physical server has an upper limit. Therefore, the ultimate means is to scale out by adding more nodes.
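One standard way to reason about that nonlinearity is Amdahl's law, a general model (not PolarDB-specific): if part of a workload is serial, adding cores yields diminishing returns. A minimal sketch:

```python
# Amdahl's law: a general model (not PolarDB-specific) for why doubling
# cores does not double throughput when part of the workload is serial
# (e.g. a single MySQL session runs on one thread).

def amdahl_speedup(cores: int, parallel_fraction: float) -> float:
    """Ideal speedup on `cores` cores when only `parallel_fraction`
    of the work can run in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (4, 8, 16):
    print(cores, round(amdahl_speedup(cores, 0.9), 2))
# 4 -> 3.08, 8 -> 4.71, 16 -> 6.4 : far from linear scaling
```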
In short, PolarDB can be scaled out to a maximum of 16 nodes, and be scaled up to a maximum of 88 cores. The storage capacity is dynamically resized without additional configuration.
Thanks to the separation of storage and compute, we can scale the configuration of a PolarDB database node up or down independently. If no data migration is involved, the whole process takes only 5 to 10 minutes (and is being constantly optimized). If the current server's resources are insufficient, we can also quickly migrate to another server. When a cross-server migration is involved, however, there may be tens of seconds of transient disconnection (in the future, PolarProxy will eliminate this effect so that upgrades have no impact on the business application at all).
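Because a cross-server migration can briefly drop connections, applications can mask the transient disconnection with a simple retry wrapper. Below is a minimal sketch using the PyMySQL driver (PolarDB is MySQL-compatible); the endpoint, credentials, and retry parameters are hypothetical placeholders.

```python
import time
import pymysql  # generic MySQL driver; PolarDB is MySQL-compatible

def query_with_retry(sql, retries=5, backoff=2.0):
    """Run a query, reconnecting on transient errors such as the
    brief disconnection during a cross-server spec change."""
    for attempt in range(retries):
        try:
            conn = pymysql.connect(
                host="pc-xxxx.polardb.rds.aliyuncs.com",  # hypothetical cluster endpoint
                user="app", password="***", database="demo",
                connect_timeout=5,
            )
            try:
                with conn.cursor() as cur:
                    cur.execute(sql)
                    return cur.fetchall()
            finally:
                conn.close()
        except pymysql.err.OperationalError:
            if attempt == retries - 1:
                raise
            time.sleep(backoff * (attempt + 1))  # linear backoff before retrying
```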
Because all nodes in the same cluster must be upgraded together, we adopt a gentle rolling upgrade that further reduces the unavailable time by controlling the pace of the upgrade and adjusting the timing of primary/standby switchovers.
Because of shared storage, we can quickly add nodes without copying any data. The whole process takes only 5 to 10 minutes (and is being constantly optimized). Adding a node does not affect the business application; removing a node only affects connections to the removed node, and those connections can be re-established afterwards.
PolarProxy dynamically detects newly added nodes and automatically adds them to the read nodes behind the read/write splitting backend. Applications that connect to PolarDB through the cluster address (the read/write splitting address) immediately enjoy better performance and throughput.
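In practice, this means the application only ever needs the single cluster endpoint. The sketch below is hypothetical and illustrates the manual routing an application would otherwise have to implement itself; all hostnames are placeholders.

```python
# With PolarProxy, the application only ever sees the cluster endpoint.
# PolarProxy routes writes to the primary and load-balances reads across
# replicas, including any node added later -- no application change needed.
# All endpoints below are hypothetical placeholders.

CLUSTER_ENDPOINT = "pc-xxxx.polardb.rds.aliyuncs.com"  # read/write splitting address

# Without the proxy, the application would have to track every node itself:
PRIMARY_ENDPOINT = "pi-primary.polardb.rds.aliyuncs.com"
READ_ENDPOINTS = [
    "pi-replica1.polardb.rds.aliyuncs.com",
    "pi-replica2.polardb.rds.aliyuncs.com",  # must be updated on every scale-out
]

def endpoint_for(sql: str) -> str:
    """Manual routing an app would need *without* PolarProxy: writes go
    to the primary, reads are spread across the replicas."""
    if sql.lstrip().lower().startswith(("insert", "update", "delete")):
        return PRIMARY_ENDPOINT
    return READ_ENDPOINTS[hash(sql) % len(READ_ENDPOINTS)]
```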
You don't need to be concerned about the storage space of PolarDB. It is charged based on the actual volume used, which is automatically settled on an hourly basis.
In the current design, the I/O performance is related to the specification of the database nodes. The larger the specification, the higher the IOPS and I/O throughput. I/O operations are isolated and restricted on the nodes to avoid I/O contention among multiple database clusters.
Data is stored in a storage pool made up of a large number of servers. For reliability, each data block has three replicas stored on different servers in different racks. The storage pool manages itself: it dynamically resizes storage, balances load, and avoids storage fragmentation and data hotspots.
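A minimal sketch of the rack-aware placement rule described above: choose three servers for each data block such that no two replicas share a rack. The server inventory and the `place_replicas` helper are hypothetical, not PolarStore's actual placement code.

```python
import random
from collections import defaultdict

def place_replicas(servers, replicas=3):
    """Pick `replicas` servers for one data block such that no two
    chosen servers share a rack (rack-aware placement).
    `servers` is a list of (server_id, rack_id) tuples."""
    by_rack = defaultdict(list)
    for server_id, rack_id in servers:
        by_rack[rack_id].append(server_id)
    if len(by_rack) < replicas:
        raise ValueError("not enough distinct racks for placement")
    racks = random.sample(list(by_rack), replicas)     # distinct racks
    return [random.choice(by_rack[r]) for r in racks]  # one server per rack

inventory = [("s1", "rackA"), ("s2", "rackA"), ("s3", "rackB"),
             ("s4", "rackC"), ("s5", "rackC"), ("s6", "rackD")]
print(place_replicas(inventory))  # e.g. ['s2', 's3', 's6']
```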
An online education company in Beijing runs an online examination system for primary school students. On ordinary weekdays, 50,000 to 100,000 users are online simultaneously; on weekends, 200,000; and during the peak examination period, 500,000 to 1 million. The data size is less than 500 GB. The main pain points are highly concurrent user access, read-write contention, and high I/O. Always buying the highest configuration would be too costly. The elastic scaling capability of PolarDB allows the company to temporarily increase the database configuration and cluster scale during peak days, reducing overall cost by 70% compared to the previous solution.
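As a rough illustration of how elastic scaling produces that kind of saving, compare paying for peak capacity year-round with scaling up only on peak days. The figures below are invented for illustration and do not reproduce the company's actual bill:

```python
# Illustrative comparison: fixed peak-sized cluster vs. elastic scaling.
# All prices and day counts are invented for illustration only.

BASE_COST_PER_DAY = 100.0   # smaller configuration for ordinary days
PEAK_COST_PER_DAY = 400.0   # largest configuration for exam peaks

peak_days = 30              # assumed examination-period days per year

always_peak = PEAK_COST_PER_DAY * 365
elastic = BASE_COST_PER_DAY * (365 - peak_days) + PEAK_COST_PER_DAY * peak_days

saving = 1 - elastic / always_peak
print(f"elastic saves {saving:.0%} vs. always-peak")  # ~69% with these numbers
```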
To learn more about Alibaba Cloud PolarDB, visit https://www.alibabacloud.com/blog/a-brief-history-of-development-of-alibaba-cloud-polardb_594254