This is the first post in the Seven Major Cloud Native Trends for 2020 series.
In 2019, the major serverless computing platforms in the industry improved greatly in capability and became more versatile. For example, reserved resources are now used to eliminate the impact of cold starts on latency, so that even latency-sensitive online applications can be built in serverless mode. The serverless ecosystem is also developing continuously: many open source projects and startups have emerged in the fields of application construction, security, monitoring, and alerting, and the toolchain is becoming increasingly sophisticated.
Users' acceptance of serverless is constantly increasing. In addition to industries such as the Internet that quickly embrace new technologies, traditional enterprise users have also begun to adopt serverless technology. As we enter a new decade, we expect the serverless field to evolve along the following routes:
Request-based billing and low response times are intrinsically contradictory. Serverless technologies such as FaaS were initially insensitive to response time and were therefore first used for event-driven offline workloads. However, products such as AWS Lambda Provisioned Concurrency and the Azure Functions Premium Plan now allow users to pay a little extra for shorter response times. This undoubtedly makes serverless more suitable for online businesses.
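To make the tradeoff concrete, here is a minimal sketch of the two billing models. The function names are ours and the per-request, per-GB-second, and warm-capacity rates are illustrative placeholders, not official pricing from any vendor:

```python
# Sketch: why pure request-based billing and low latency pull in opposite
# directions. All rates below are illustrative placeholders, NOT real pricing.

def on_demand_cost(requests, duration_s, memory_gb,
                   gb_second_rate=0.0000167, request_rate=0.0000002):
    """Pay-per-use: cost scales only with actual invocations."""
    return requests * request_rate + requests * duration_s * memory_gb * gb_second_rate

def provisioned_cost(requests, duration_s, memory_gb, instances, hours,
                     gb_second_rate=0.0000167, request_rate=0.0000002,
                     warm_gb_second_rate=0.0000042):
    """Pay a fixed premium to keep `instances` warm, removing cold starts."""
    warm = instances * hours * 3600 * memory_gb * warm_gb_second_rate
    return warm + on_demand_cost(requests, duration_s, memory_gb,
                                 gb_second_rate, request_rate)

# A latency-sensitive service pays a flat premium for warm capacity,
# regardless of how many requests actually arrive:
od = on_demand_cost(1_000_000, 0.1, 0.5)
pc = provisioned_cost(1_000_000, 0.1, 0.5, instances=10, hours=720)
assert pc > od  # provisioned capacity costs more, but eliminates cold starts
```

The point of the sketch is that the warm-capacity term is independent of the request count, which is exactly the "pay a little extra for shorter response times" tradeoff described above.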
After business code is hosted on a serverless platform, it enjoys automatic scaling and pay-as-you-go billing. However, if the underlying infrastructure and related services do not scale in real time, the business as a whole is not elastic. AWS has put a lot of work into improving the real-time elasticity of resources used by Lambda, such as VPC networking and database connection pools. We believe other vendors will soon follow, accelerating the industry's progress toward serverless infrastructure and cloud services as a whole.
Although cloud vendors are vigorously promoting their serverless products, developers generally worry about vendor lock-in. Therefore, organizations above a certain scale will use open source solutions, such as Knative, to build their own serverless platforms. Once an open source solution becomes mainstream, cloud vendors will take the initiative to provide compatibility with open source standards and increase their investment in the open source community.
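As an illustration of why such open source platforms feel portable, the sketch below is a minimal Knative Serving Service manifest modeled on the community's helloworld-go sample; the image and environment values are sample placeholders:

```yaml
# Minimal Knative Serving Service (sketch based on the helloworld-go sample).
# Knative scales this workload to zero when idle and back up on demand.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World"
```

Because the manifest is ordinary Kubernetes-style YAML, the same definition can run on any cluster with Knative installed, which is exactly the lock-in hedge organizations are looking for.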
IDEs, problem diagnosis, continuous integration and delivery, and other supporting tools and services will provide a more complete user experience, and we will see more success stories and best practices. Serverless application frameworks will emerge in front-end development and other fields, maximizing engineering efficiency.
Serverless platforms require application images to be small enough for fast distribution and application startup times to be short. Although languages such as Java, Node.js, and Python differ in these respects, the Java community is working hard to succeed in this area: Java keeps trying to "lose weight" through technologies such as Java 9 modules and GraalVM native images. Spring, the mainstream framework, has begun to embrace GraalVM, and newer frameworks such as Quarkus and Micronaut are making fresh breakthroughs. We look forward to the brand-new experience that Java will provide in the serverless field.
When serverless is applied to function composition scenarios, the greatest challenge is latency amplification: functions chained in series must pass state to one another, and function processing requires frequent interaction with external storage, whereas in a traditional architecture these steps are handled within a single process. Solving this calls for an intermediate computing layer (an acceleration layer), which is one of the future directions of academic research and product development.
Solomon Hykes, one of the founders of Docker, once said, "If WASM and WASI were around in 2008, we wouldn't have needed to create Docker." This illustrates the importance of WASM. Although WASM is widely regarded as a browser technology, it provides excellent security isolation, extremely fast startup, and support for more than 20 languages, so why not run it on the server? These characteristics perfectly suit the needs of FaaS.
Alibaba Developer - March 3, 2020