Serverless ApsaraDB RDS for PostgreSQL instances scale computing resources in real time to match changing business requirements. This allows you to quickly and independently adapt to fluctuating workloads, avoid idle resources, and reduce costs and O&M effort. This topic describes the features and architecture of serverless RDS instances and how to use them.
Introduction
Serverless RDS instances use a new deployment model for RDS instances that are equipped with cloud disks. In this model, CPU and memory resources are scaled in real time, and network resources, namespaces, and storage resources are isolated. Computing resources are billed on a pay-as-you-go basis and can be scaled quickly and independently to adapt to fluctuating workloads. These capabilities simplify rightsizing your RDS instances, which helps you reduce costs and improve efficiency.
A serverless RDS instance is billed based on RDS Capacity Units (RCUs). One RCU delivers performance equivalent to up to 1 CPU core and 2 GB of memory. A serverless RDS instance automatically adjusts the number of RCUs within the range that you specify based on your workload.
The maximum number of connections to a serverless RDS instance is fixed at 2,400. This value cannot be modified and does not vary with the number of RCUs.
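You can check how much of the fixed connection cap is in use from the database side. The following is a minimal sketch that uses psycopg2 and the standard pg_stat_activity view; the endpoint, database name, and credentials are placeholders for illustration only.

```python
# Minimal sketch: check how close the instance is to the fixed connection cap.
# The endpoint, database name, and credentials below are placeholders.
import psycopg2

MAX_CONNECTIONS = 2400  # fixed cap for serverless RDS for PostgreSQL instances

conn = psycopg2.connect(
    host="pgm-example.pg.rds.aliyuncs.com",  # placeholder instance endpoint
    port=5432,
    dbname="postgres",
    user="app_user",
    password="***",
)
with conn, conn.cursor() as cur:
    # pg_stat_activity lists all current backends, including this session.
    cur.execute("SELECT count(*) FROM pg_stat_activity;")
    in_use = cur.fetchone()[0]
    print(f"{in_use}/{MAX_CONNECTIONS} connections in use "
          f"({in_use / MAX_CONNECTIONS:.1%} of the fixed cap)")
conn.close()
```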
The following figure compares resource utilization between a regular RDS instance and a serverless RDS instance under fluctuating workloads.
The preceding figure supports the following conclusions:
Regular RDS instance: Low resource utilization during off-peak hours translates into wasted costs, while insufficient resources during peak hours affect service performance.
Serverless RDS instance:
Resources are scaled in response to changes in the workload. This minimizes idle resources and keeps resource utilization high.
Resources are scaled to match the workload requirements during peak hours, which ensures performance and service stability.
You are charged based on the actual amount of resources that are used to run your workloads. This significantly reduces costs, as illustrated in the sketch after this list.
No human intervention is required. This improves O&M efficiency and reduces costs for O&M administrators and developers.
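To make the cost comparison concrete, the following sketch estimates the daily cost of a regular instance that is sized for peak load against a serverless instance that is billed per RCU-hour. The unit price and the hourly workload profile are hypothetical values used only for illustration; they are not actual ApsaraDB RDS pricing.

```python
# Illustrative comparison of provisioned vs. serverless billing for one day.
# The per-RCU-hour price and the workload profile are hypothetical examples,
# not actual ApsaraDB RDS pricing.

PRICE_PER_RCU_HOUR = 0.02   # hypothetical unit price
PROVISIONED_RCUS = 8        # regular instance sized for peak load (~8 cores / 16 GB)

# Hypothetical hourly demand in RCUs: low overnight, a short peak during the day.
hourly_demand = [1] * 8 + [3] * 6 + [8] * 4 + [3] * 6  # 24 hours

provisioned_cost = PROVISIONED_RCUS * 24 * PRICE_PER_RCU_HOUR
serverless_cost = sum(hourly_demand) * PRICE_PER_RCU_HOUR

print(f"Provisioned for peak:  {provisioned_cost:.2f} per day")
print(f"Serverless (per RCU-hour used): {serverless_cost:.2f} per day")
```

With this profile, the serverless instance is billed only for the 76 RCU-hours actually consumed, while the regular instance pays for 192 RCU-hours of peak-sized capacity around the clock.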
Serverless RDS instances support the automatic start and stop feature. If no connections are established to a serverless RDS instance, the instance is automatically suspended to release computing resources and reduce costs. When a connection to the instance is initiated, the instance automatically resumes.
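Resuming a suspended instance is not instantaneous, so the first connection attempt after a period of inactivity may fail or take longer than usual. The following is a minimal client-side sketch, assuming psycopg2; the endpoint, credentials, and retry settings are placeholders, and the actual resume latency depends on the instance.

```python
# Minimal sketch: retry the initial connection while a suspended serverless
# instance resumes. Endpoint, credentials, and retry settings are placeholders.
import time
import psycopg2
from psycopg2 import OperationalError

def connect_with_retry(retries=5, delay_seconds=3):
    for attempt in range(1, retries + 1):
        try:
            return psycopg2.connect(
                host="pgm-example.pg.rds.aliyuncs.com",  # placeholder endpoint
                port=5432,
                dbname="postgres",
                user="app_user",
                password="***",
                connect_timeout=10,
            )
        except OperationalError:
            if attempt == retries:
                raise
            time.sleep(delay_seconds)  # wait for the instance to finish resuming

conn = connect_with_retry()
print("connected")
conn.close()
```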
Serverless RDS instances support auto scaling and are optimized for high-throughput write operations and highly concurrent processing. This makes them suitable for scenarios that involve large amounts of data and significant traffic fluctuations.
Benefits
Low cost: Serverless RDS instances do not rely on other infrastructure or services and provide stable, efficient data storage and access. You are charged only for the resources that you use, based on the pay-as-you-go billing method. This deployment model is ideal for startups and for scenarios in which you want to run workloads immediately after resources are created.
Large storage capacity: You can purchase a storage capacity of up to 32 TB for a serverless RDS instance. The system automatically expands the storage capacity based on the data volume of the RDS instance. This effectively prevents your services from being adversely affected by insufficient storage.
Auto scaling of computing resources: The computing resources that are required for read and write operations can be automatically scaled without human intervention. This greatly reduces O&M costs and risks of human errors.
Fully managed and maintenance-free services: Serverless RDS instances are fully managed by Alibaba Cloud, allowing you to focus on developing your applications instead of O&M operations, such as system deployment, scaling, and alert handling. The O&M operations are performed in the background and are transparent at the service layer.
Scenarios
Scenarios that require infrequent access to underlying databases, such as databases in development and testing environments
Software as a service (SaaS) scenarios, such as website building of small and medium-sized enterprises
Individual developers
Educational scenarios, such as teaching and student experiments
Scenarios that handle inconsistent and unpredictable workloads, such as IoT and edge computing
Scenarios that require fully managed, maintenance-free services
Scenarios in which services change frequently or are unpredictable
Scenarios in which intermittent scheduled tasks are involved