
Breaking the Serverless Implementation Boundary: Alibaba Cloud SAE Releases Five New Features

This article discusses the release of five new features for Alibaba Cloud Serverless App Engine (SAE).


By Aliware

Are open-source, self-built stacks really the fastest, most economical, and most stable solution for microservice scenarios? Will complexity become a fatal flaw of Kubernetes? Is Kubernetes the only way to containerize enterprise applications? Are Java microservices still far from Serverless? Are Serverless applications limited to simple, non-core workloads such as mini-programs, ETL jobs, and scheduled backups?

At the Apsara Conference 2021, Ding Yu (Shutong), researcher at Alibaba and General Manager of the Cloud-Native Application Platform at Alibaba Cloud Intelligence, announced the new product positioning and five new features of Serverless App Engine (SAE). This article presents Alibaba's answers to the questions above.


1. From Dedicated to General-Purpose: SAE Is Naturally Suitable for the Large-Scale Implementation of Enterprise Core Businesses

Unlike FaaS-style Serverless, SAE is application-centric and provides an application-oriented UI and API. It does not change the programming model or the way applications are deployed, so developers keep the development and deployment experience they are used to on traditional servers and can still develop, debug, and monitor locally. This lowers the threshold for adopting Serverless and lets online enterprise applications migrate smoothly with zero code changes. That is why SAE turns Serverless from a dedicated service into a general-purpose one and breaks the implementation boundary of Serverless: it is no longer limited to frontend, full-stack, and mini-program scenarios, because backend microservices, SaaS services, and IoT applications can also be built on Serverless. Serverless is naturally suitable for the large-scale implementation of enterprise core businesses.

2. From Complex to Simple: SAE Is Naturally Suitable for Zero-Threshold Containerization of Enterprises

Unlike open-source, self-built microservices, SAE provides a full set of out-of-the-box microservice governance capabilities that have been proven during Double 11. Customers do not need to worry about framework selection, data isolation, distributed transactions, circuit breaking, throttling, or degradation, nor about secondary custom development caused by limited community maintenance. Spring Cloud and Dubbo applications can be migrated seamlessly without modification. On top of the open-source capabilities, SAE also enhances advanced features such as graceful startup and shutdown, service authentication, and full-process grayscale release.

With SAE, users do not have to care about the technical details of Kubernetes, so enterprise applications can be containerized with zero threshold. SAE builds images automatically and accepts WAR, JAR, PHP, and zip packages, lowering the bar for generating Docker images. Users do not have to deal with Kubernetes network and storage plug-in adaptation: each application instance gets an IP address reachable within the VPC, and data can be persisted to storage services. Users also do not have to worry about upgrading and operating Kubernetes, the stability risks of version upgrades, or wiring up monitoring components and elastic controllers. SAE provides console-based, end-to-end observability and flexible, diverse elastic policy configurations. Users keep their original packaging and deployment methods while enjoying the benefits of Kubernetes.

3. Five New Features: Highlight the New Advantages of Serverless and Extend the Boundary of Serverless

  • Elasticity 2.0: The first hybrid elastic strategy in the industry, supporting a mix of timed and metric-based policies. Building on open-source Kubernetes capabilities, SAE adds elasticity triggers based on TCP connection counts and SLB QPS/RT metrics, and supports advanced settings such as scaling step size and cooldown time (see the sketch after this list).
  • Java Cold Start Improved by 40%: Based on the enhanced AppCDS start-up acceleration technology in Alibaba Dragonwell 11, the class data loaded during an application's first start is saved to a cache, and later starts load from that cache. Compared with standard OpenJDK, cold-start speed improves by about 40% (an AppCDS workflow sketch also follows this list).
  • Extreme Deployment Efficiency (15 Seconds): With end-to-end upgrades of the underlying stack, secure sandboxed containers 2.0, and image acceleration, end-to-end deployment can complete in as little as 15 seconds.
  • All-in-One PHP Application Hosting: PHP applications can be deployed to SAE directly as zip packages, with PHP runtime selection and application monitoring built in.
  • Richer Developer Tool Chain: In addition to developer tools such as Cloud Toolkit, the CLI, and VS Code, Terraform and Serverless Devs are now supported. With resource orchestration, SAE applications and the cloud resources they depend on can be deployed together, making environment setup easier.
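To make the hybrid elastic strategy more concrete, here is a minimal Python sketch of how a timed rule and a metric rule can be combined into one scaling decision, with step size and cooldown applied. It is an illustration only, not SAE's actual controller; all names, thresholds, and the decision logic are assumptions.

```python
import math
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class MetricRule:
    target_qps_per_instance: float   # scale out when average QPS per instance exceeds this
    scale_out_step: int              # maximum instances added in one scaling action
    cooldown_seconds: int            # minimum interval between scaling actions

@dataclass
class ScheduleRule:
    start: time                      # during this window keep at least min_instances running
    end: time
    min_instances: int

def desired_instances(now: datetime, current: int, total_qps: float,
                      metric: MetricRule, schedule: ScheduleRule,
                      last_scaled: datetime) -> int:
    """Combine a timed floor with a metric-driven target into one scaling decision."""
    # Metric policy: instances needed to keep per-instance QPS under the target.
    metric_target = max(1, math.ceil(total_qps / metric.target_qps_per_instance))
    # Timed policy: enforce a minimum instance count inside the configured window.
    in_window = schedule.start <= now.time() <= schedule.end
    floor = schedule.min_instances if in_window else 1
    target = max(metric_target, floor)
    # Advanced settings: respect the cooldown and limit the scale-out step size.
    if (now - last_scaled).total_seconds() < metric.cooldown_seconds:
        return current
    if target > current:
        return min(target, current + metric.scale_out_step)
    return target

# Example: an evening peak window combined with a QPS-based trigger.
metric = MetricRule(target_qps_per_instance=100, scale_out_step=5, cooldown_seconds=60)
peak = ScheduleRule(start=time(20, 0), end=time(22, 0), min_instances=10)
print(desired_instances(datetime(2021, 10, 20, 20, 30), current=4, total_qps=350,
                        metric=metric, schedule=peak,
                        last_scaled=datetime(2021, 10, 20, 20, 0)))  # -> 9, step-limited toward 10
```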

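The cold-start improvement builds on AppCDS in Alibaba Dragonwell 11. As a hedged illustration, the sketch below walks through the standard OpenJDK 11 AppCDS workflow that the enhancement builds on: record the classes loaded on the first start, dump them into a shared archive, and start from the archive afterward. The jar and file names are placeholders, and the Dragonwell-specific enhancements are not shown here.

```python
import subprocess

APP_JAR = "app.jar"  # placeholder for your application's jar

def run(args):
    """Run one JVM invocation and fail loudly if it does not succeed."""
    print("$", " ".join(args))
    subprocess.run(args, check=True)

# 1. First start: record the classes the application loads.
#    Stop the application once startup completes; the list is written when the JVM exits.
run(["java", "-Xshare:off", "-XX:DumpLoadedClassList=classes.lst", "-jar", APP_JAR])

# 2. Dump the recorded classes into a shared archive (the "cache" of the first start).
run(["java", "-Xshare:dump", "-XX:SharedClassListFile=classes.lst",
     "-XX:SharedArchiveFile=app.jsa", "-cp", APP_JAR])

# 3. Later starts load class metadata from the archive instead of re-parsing the jars,
#    which is what shortens the cold start.
run(["java", "-Xshare:on", "-XX:SharedArchiveFile=app.jsa", "-jar", APP_JAR])
```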
4. Four Best Practices That Represent Typical Models on Serverless

  • Low-Threshold Transformation to the Microservice Model

Serverless is faster, more economical, and more stable than open-source, self-built microservices. As business grows rapidly, many enterprises face the challenge of moving to a microservice model, or find that self-built microservices cannot meet their needs for stability and diversification. SAE's full set of out-of-the-box microservice capabilities reduces customers' learning, research, and development costs, and its stability has been proven during Double 11. With SAE, enterprises can quickly complete the transformation to microservices and launch new businesses. This is also the most widely used SAE scenario; SAE can be regarded as the best Serverless practice in the microservice field.

  • Start and Stop the Development and Test Environments with One Click

Medium and large enterprises usually maintain several environments, and development and pre-release environments are often not needed 24/7. Keeping application instances running around the clock is expensive, and in some enterprises CPU utilization is close to zero, so the demand for cost reduction is clear. With SAE's one-click start-stop capability, these enterprises can release resources flexibly and on demand; the development and test environments alone can save about two-thirds of the machine cost, which is considerable (a rough calculation follows below). Next, we will use Kubernetes orchestration to describe the dependencies between applications and resources, so that a complete environment can be initialized or cloned with one click.
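As a rough illustration of where the two-thirds figure can come from (the numbers are assumptions, not SAE pricing): if a development or test environment only needs to run about 8 hours a day and is stopped the rest of the time, pay-as-you-go billing covers roughly one-third of the hours that an always-on setup would.

```python
HOURS_PER_DAY = 24
ACTIVE_HOURS = 8        # assumption: dev/test is only needed during working hours
HOURLY_COST = 1.0       # normalized cost of one application instance per hour

always_on = HOURS_PER_DAY * HOURLY_COST    # cost per instance per day, kept running 24/7
start_stop = ACTIVE_HOURS * HOURLY_COST    # cost per instance per day with one-click start/stop
saving = 1 - start_stop / always_on

print(f"always-on: {always_on:.0f}, start/stop: {start_stop:.0f}, saving: {saving:.0%}")  # ~67%
```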

  • Full-Process Grayscale

SAE's full-process grayscale is more powerful than the grayscale capability provided by open-source Kubernetes Ingress. Combining the layer-7 traffic grayscale of Kubernetes Ingress with the scenario characteristics of customers at the PaaS layer, SAE extends grayscale from frontend traffic all the way down to the interface and method level across multiple cascading microservices. Deployment and O&M are also simpler than before: previously, customers had to deploy their applications in two namespaces and maintain two complete environments, one for the official release and one for the grayscale release, which meant high hardware costs and cumbersome deployment and O&M. With SAE, customers only need one environment. By configuring grayscale rules, the specified traffic is directed to dedicated grayscale instances, and the decision is cascaded layer by layer along the call chain, which limits the blast radius and reduces hardware costs (a minimal routing sketch follows).
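The sketch below illustrates the idea behind full-process grayscale; it is not SAE's implementation, which is configured through grayscale rules rather than application code. A rule matches incoming traffic at the entry point, matching requests go to the grayscale instances, and the decision is carried along the call chain (here as a hypothetical header) so every cascading microservice makes the same choice.

```python
from dataclasses import dataclass

GRAY_HEADER = "x-gray"   # hypothetical tag propagated along the call chain

@dataclass
class GrayRule:
    header: str          # request header to inspect, e.g. a user id or region
    values: set          # header values that should hit the grayscale instances

def route(headers: dict, rule: GrayRule) -> str:
    """Decide at each hop whether to call the grayscale or the stable instance group."""
    if headers.get(GRAY_HEADER) == "true":
        return "gray"                        # an upstream hop already chose the gray path
    if headers.get(rule.header) in rule.values:
        headers[GRAY_HEADER] = "true"        # tag the request so downstream hops follow suit
        return "gray"
    return "stable"

# Example: only two pilot users are routed to the grayscale version, layer by layer.
rule = GrayRule(header="x-user-id", values={"1001", "1002"})
print(route({"x-user-id": "1001"}, rule))    # gray
print(route({"x-user-id": "9999"}, rule))    # stable
```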

  • Use SAE as an Elastic Resource Pool to Optimize Resource Utilization

Most customers run entirely on SAE, while a small number keep the steady-state portion of a business on ECS and use SAE as an elastic resource pool, deploying SAE and ECS in combination. Customers only need to make sure that the ECS and SAE instances of the same application are mounted to the backend of the same SLB instance with an appropriate weight ratio, and that the microservice applications are registered to the same registry. In addition, the customer's self-built release system should be reused so that SAE instances and ECS instances run the same version in every release, and the customer's monitoring system should be reused by sending SAE monitoring data to it through OpenAPI and aggregating it with the ECS monitoring data. When traffic peaks arrive, all elastic instances are handled by SAE, which improves scale-out efficiency and reduces costs. This hybrid approach also works as an intermediate, transitional step when migrating from ECS to SAE, improving stability during the migration.

The five new features and four best practices of SAE have broken the boundary of implementing Serverless. They make application containerization faster and Kubernetes adoption easier, allowing containers, Serverless, and PaaS to be integrated, and combining advanced technology, optimized resource utilization, and a better development and O&M experience.
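To close, here is a minimal sketch of the sizing logic behind the hybrid ECS + SAE practice above. It is illustrative only: the instance capacity and baseline are assumptions, and in a real setup the weights are configured on the SLB instance while scaling is handled by SAE.

```python
import math

ECS_BASELINE = 10           # assumption: steady-state instances kept on ECS
QPS_PER_INSTANCE = 100      # assumption: capacity of a single instance

def plan(total_qps: float) -> dict:
    """Size the SAE elastic pool for the traffic that exceeds the ECS baseline."""
    needed = math.ceil(total_qps / QPS_PER_INSTANCE)
    sae_instances = max(0, needed - ECS_BASELINE)        # only the overflow goes to SAE
    total = ECS_BASELINE + sae_instances
    return {
        "ecs_instances": ECS_BASELINE,
        "sae_instances": sae_instances,
        # SLB weights proportional to instance counts, so each instance takes a similar share
        "ecs_weight": round(100 * ECS_BASELINE / total),
        "sae_weight": round(100 * sae_instances / total),
    }

print(plan(800))     # off-peak: the ECS baseline alone is enough
print(plan(2500))    # peak: 15 elastic SAE instances absorb the overflow
```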
