The past and present of Serverless
From cloud computing to Serverless architecture
Hello, I'm Liu Yu, product manager of Alibaba Cloud Serverless. I'm glad to explore the past and present of the Serverless architecture with you.
From cloud computing to cloud native to the Serverless architecture, the rapid evolution of technology follows discernible patterns. So why did the Serverless architecture emerge?
The birth of cloud computing
Since the advent of ENIAC, the world's first general-purpose computer, computer science and technology have never stopped advancing, and in recent years the pace has only quickened: artificial intelligence keeps breaking new ground, 5G is opening up opportunities in the Internet of Things, and cloud computing is steadily entering ordinary households.
Three keywords can be seen in the figure. From 2003 to 2006, Google published three seminal papers describing GFS (the Google File System), MapReduce (a parallel computing model), and Bigtable (a distributed database). These papers laid out the technical foundation and future opportunities of large-scale computing and effectively set the direction for cloud computing. Of these three papers, or these three technologies, it has been said that "because of them, cloud computing officially began".
The rapid development of cloud computing is plain for all to see. However, as cloud computing matured, another term was born and quickly seized the spotlight, drawing even broader public attention: cloud native.
Comparing the terms "cloud computing" and "cloud native computing", we can see that the latter simply inserts "native" between "cloud" and "computing". We can therefore regard cloud native computing as the outcome of cloud computing's rapid development, in both technical iteration and conceptual upgrade.
What is cloud computing? In fact, the embryonic concept appeared as early as 1961. At MIT's centennial celebration, John McCarthy, who would go on to win the Turing Award in 1971, first proposed an idea later described as the earliest vision of cloud computing: in the future, computing would become a public utility, consumed by everyone just like water, electricity, and gas. In 1996, the term "cloud computing" was formally coined. In 2009, UC Berkeley (University of California, Berkeley) published a detailed treatment in its paper "Above the Clouds", describing cloud computing as a long-held dream of computing as infrastructure that was rapidly becoming a commercial reality. The paper also defined cloud computing clearly: it comprises the application services delivered over the Internet, together with the hardware and software in the data centers that provide those services.
The rise of cloud native
Today, cloud native technology is also developing rapidly. So what is cloud native? The article "What is real cloud native" gives a very clear explanation: software, hardware, and architectures produced by the cloud are truly cloud native, and technology produced by the cloud is cloud native technology. Indeed, what is born in the cloud, grows up in the cloud, and comes of the cloud is cloud native.
What does cloud native include? Prefix a familiar technology with the words "cloud native" and you have a cloud native technology: database becomes cloud native database, network becomes cloud native network, and so on. In the CNCF Landscape, the Cloud Native Computing Foundation lays out the dimensions of cloud native products, including databases, streaming, messaging, container images, service mesh, gateways, Kubernetes, and of course a very popular term: Serverless.
The emergence of the Serverless architecture
The Serverless architecture is often described as a kind of glue: it links many other cloud native products to users' businesses while delivering highly attractive technical dividends, which is why so many projects and businesses choose it. So what is the Serverless architecture?
The word itself hints at the idea it conveys. "Server" refers to the server, and "less" means paying less attention to it. The idea behind the Serverless architecture, then, is to leave professional work to professionals, so that developers can worry less about underlying concerns such as servers and devote more energy to the business logic, where the real value lies.
In its 2009 paper on cloud computing, UC Berkeley gave a clear definition of cloud computing, enumerated ten difficulties and challenges facing it, including service availability, data security, and auditability, and asserted that cloud computing would lead the next decade.
In 2019, exactly ten years later, UC Berkeley published another paper, which explained the Serverless architecture from multiple perspectives. Structurally, it affirmed that Serverless is the combination of FaaS and BaaS; from the perspective of features, it held that a product or service considered Serverless should be pay-as-you-go and elastically scalable. It even asserted, quite radically, that Serverless will become the default computing paradigm of the cloud era and will replace Serverful computing, bringing an end to the client-server model.
From IaaS to PaaS to Serverless, the direction of cloud computing is becoming ever clearer, and the removal of server management from the developer's view ever more pronounced.
Whether we speak of cloud native or of the Serverless architecture, the concept of the cloud keeps upgrading and cloud technology keeps iterating. All of these changes ultimately serve the same goals: greater efficiency, stronger security, lower cost, and higher productivity.
What is the Serverless architecture?
Although the Serverless architecture has no single authoritative definition, many accept that Serverless is the combination of FaaS and BaaS. FaaS stands for Function as a Service, while BaaS refers to Backend as a Service. Together, the two form indispensable parts of the Serverless architecture, providing developers with the technical dividends of lower costs and higher efficiency.
Admittedly, the CNCF affirmed in its Serverless white paper that Serverless is the combination of FaaS and BaaS, and UC Berkeley affirmed the same in its paper while adding, from the feature perspective, that a product or service considered Serverless must also be pay-as-you-go and elastically scalable. But that was still the description as of 2019.
Since then, the Serverless architecture has continued to update and iterate. The Serverless white paper released by the China Academy of Information and Communications Technology (CAICT) points out clearly that a Serverless computing platform comes in two forms: the function dimension and the application dimension. Over time, Alibaba Cloud pioneered Serverless App Engine (SAE), an application-oriented platform that can be regarded as a best practice for Serverless applications.
So far, the composition of the Serverless architecture has become clear:
• Structurally, Serverless is the combination of a compute platform and BaaS products. The compute platform includes event-driven Function Compute as well as Serverless App Engine, a best practice for Serverless applications. The BaaS layer includes cloud services such as API Gateway, CDN, object storage, and databases.
• From the perspective of features, as UC Berkeley put it, a product or service considered Serverless must also be pay-as-you-go and elastically scalable.
Differences between Serverless architecture and traditional architecture
As a new computing paradigm of the cloud era, the Serverless architecture is naturally distributed. Its working principle differs somewhat from the traditional architecture, though not in an earth-shattering way.
As shown in the figure, under the traditional architecture, after developing an application, developers must also purchase virtual machines, initialize the runtime environment, and install the required software (database software such as MySQL, server software such as Nginx, and so on). Once the environment is ready, they upload the business code and start the application, after which users can reach it over the network. If the request volume then grows or shrinks, developers or operations staff must scale the relevant resources up or down to match, add the corresponding policies to the load-balancing reverse proxy so the scaling takes effect promptly, and ensure that online users are not affected while all this is happening.
Under the Serverless architecture, the whole publishing process and working principle change considerably:
After completing the business code, developers only need to deploy or update it on the corresponding FaaS platform, then configure triggers according to business needs; for example, an HTTP trigger can be configured to expose a Web application. Users can then access the application over the network. Throughout this process, developers need not concern themselves with purchasing servers, operations and maintenance, installing software, or scaling resources; they focus only on their business logic. The server software that had to be installed and configured under the traditional architecture becomes configuration items managed by the cloud vendor, and the scaling that was once performed manually according to server utilization is likewise handled automatically by the cloud vendor.
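The unit of deployment under this model is just a function. A minimal sketch of an HTTP-triggered function is shown below; the handler signature, event shape, and field names are illustrative assumptions in a generic FaaS style, since each platform (including Alibaba Cloud Function Compute) defines its own interface:

```python
import json

def handler(event, context=None):
    """Handle one HTTP request; the platform, not the developer,
    manages servers, scaling, and the runtime environment."""
    body = json.loads(event.get("body", "{}"))
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

if __name__ == "__main__":
    # Simulate the platform invoking the function for one request.
    resp = handler({"body": json.dumps({"name": "Serverless"})})
    print(resp["body"])
```

Everything outside this function, from TLS termination to instance lifecycle, is the platform's responsibility; the developer ships only the logic between request and response.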
Traditionally, elastic scaling means that when a project's capacity plan conflicts with the actual cluster load, that is, when the existing cluster cannot bear the pressure, business stability is preserved by enlarging the cluster or allocating additional resources; when the cluster load is low, resource allocation is reduced as far as possible to cut the waste of idle resources and save costs. Under the Serverless architecture, elastic scaling is generalized further: from the user's point of view, capacity planning disappears altogether, and the growth and shrinkage of resources is determined entirely by the platform's scheduling.
UC Berkeley's paper describes the characteristics and advantages of the Serverless architecture along these lines: it is no longer necessary to allocate resources manually for code execution, nor to specify the resources a service needs (how many machines, how much bandwidth, how much disk, and so on); you provide only the code, and the Serverless platform handles the rest. At the current stage, the platform still requires some policies from the user when allocating resources, such as the specification of a single instance, its maximum concurrency, and its maximum CPU utilization; the ideal is to use learning algorithms to make the allocation fully automatic and adaptive. The "fully automatic adaptive allocation" described here is precisely the elastic scalability of the Serverless architecture.
Elastic scaling under the Serverless architecture means that the platform automatically allocates and destroys resources in step with fluctuations in business traffic, maximizing stability and performance while improving resource utilization. After a developer finishes the business logic and deploys the code to the Serverless platform, the platform usually does not allocate computing resources immediately; it merely persists the code, configuration, and related content. When traffic arrives, the platform automatically launches instances based on the actual traffic and configuration, and releases them when traffic falls; in some cases the instance count can even drop to zero, meaning the platform allocates no resources to the corresponding functions at all.
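The scheduling decision above can be caricatured in a few lines. This is a toy sketch, assuming a fixed per-instance concurrency limit of 100 requests; real platform schedulers weigh many more signals, but the shape of the behavior, scale-out, scale-in, and scale-to-zero, is the same:

```python
import math

REQUESTS_PER_INSTANCE = 100  # assumed per-instance concurrency limit

def instances_needed(concurrent_requests: int) -> int:
    """Return how many instances the platform would keep running
    for a given level of concurrent traffic."""
    if concurrent_requests <= 0:
        return 0  # scale to zero: no traffic, no allocated resources
    return math.ceil(concurrent_requests / REQUESTS_PER_INSTANCE)

# As traffic fluctuates over the day, instance count follows it.
for load in [0, 40, 250, 1000, 0]:
    print(f"{load:>5} concurrent requests -> {instances_needed(load)} instance(s)")
```

The key property is the last and first lines of the loop: when traffic returns to zero, the allocated resources do too, which is exactly what the traditional capacity-planning model cannot offer.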
Elastic scalability, the core technical dividend of the Serverless architecture, also represents, to a certain extent, a step toward higher resource utilization and green computing.
In the elastic-scaling part of the figure above, the left side charts traffic against machine load under the traditional virtual machine architecture, and the right side charts traffic against load under the elastic mode of the Serverless architecture. In both charts, the orange area represents the resource capacity perceived on the user side, and the blue line represents a website's traffic over one day. Comparing the two, it is easy to see that under the traditional architecture resources must be added and removed manually, at host granularity, which severely tests the implementation: the granularity is too coarse to balance resource waste against performance stability effectively.
In the left chart, the orange area above the blue line is wasted resource. In the right chart, under the elastic mode of the Serverless architecture, capacity always tracks traffic: unlike the traditional virtual machine architecture, technicians no longer intervene manually to ride out the peaks and valleys of traffic; all elasticity, both scale-out and scale-in, is provided by the cloud vendor. The benefit of this mode is twofold. It relieves operations staff of pressure and reduces the complexity of their work, and, as perceived by the user, actual resource consumption stays positively correlated with required resource consumption, which greatly reduces waste and, to a certain extent, accords with the idea of green computing.
Pay-as-you-go means paying only for what is actually used: users need not buy large amounts of resources in advance, but pay as they consume. Even non-Serverless products offer some pay-as-you-go capability; virtual machines, for example, can be billed on demand. What makes pay-as-you-go a genuine technical dividend of the Serverless architecture is its much finer granularity: from the user's perspective, resource utilization approaches 100% (utilization does not literally reach 100%; this refers only to the user-side perception, at request granularity, under the Serverless architecture).
Take a website as an example. Resource utilization is relatively high during the day and relatively low at night, yet once servers and other resources are purchased, the cost runs continuously regardless of that day's traffic; even under a pay-as-you-go model, the billing granularity is too coarse to maximize resource utilization. According to Forbes, typical servers in business and enterprise data centers deliver only 5% to 15% of their maximum processing capacity on average, which illustrates how low the utilization of traditional servers is and how much is wasted.
With the advent of the Serverless architecture, users can delegate the management of servers, databases, applications, and even logic to the service provider. This relieves users of maintenance burden and lets them pay at the granularity of what their functions actually consume; providers, in turn, can put the freed-up idle resources to other use, which is attractive from both a cost and a green-computing standpoint. Although the Serverless pay-as-you-go model, like others, charges by resource usage, its billing granularity is far finer:
• Number of requests: the Serverless architecture bills at request granularity, whereas traditional virtual machine architectures bill at instance granularity (and one instance typically serves far more than one request);
• Billing duration: the Serverless architecture typically bills at second granularity (Alibaba Cloud now supports billing at the millisecond or hundred-millisecond level), whereas traditional virtual machine architectures typically bill at hour granularity.
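The effect of these two granularities compounds. The sketch below compares the two billing models for a mostly idle site; all prices are made-up illustrative numbers, not real cloud pricing, and the hundred-millisecond rounding is an assumption about how such metering might work:

```python
import math

VM_PRICE_PER_HOUR = 0.50        # assumed hourly VM rate (illustrative)
FN_PRICE_PER_100MS = 0.0000021  # assumed rate per 100 ms execution slice

def vm_cost(hours_running: float) -> float:
    """A VM bills for every hour it is on, busy or idle."""
    return math.ceil(hours_running) * VM_PRICE_PER_HOUR

def serverless_cost(requests: int, ms_per_request: float) -> float:
    """A function bills per request, rounded up to 100 ms slices."""
    slices = math.ceil(ms_per_request / 100)
    return requests * slices * FN_PRICE_PER_100MS

# A mostly idle site: 10,000 requests/day at 80 ms each, VM on 24 h.
print(f"VM:         ${vm_cost(24):.2f}/day")
print(f"Serverless: ${serverless_cost(10_000, 80):.4f}/day")
```

Under these assumed numbers the VM costs $12.00 for the day while the function bill is about two cents, because the VM is paid for around the clock but the functions run for a combined total of under 15 minutes.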
In the figure above, the blue line again shows a website's traffic over one day. Comparing the chart of traffic and expense under the traditional virtual machine architecture with the chart of expense under the elastic mode of the Serverless architecture makes the difference plain. On the left, a business usually evaluates resource needs before going online; after such an evaluation, this website purchased a server able to withstand at most 1300 PV per hour, so the total compute the server provides is the orange area, and the day's cost corresponds to that whole area. Yet the truly effective resource usage and spending is only the area under the traffic curve; the orange area above the curve is resource loss and extra expenditure. On the right, under the Serverless elastic mode, expense is essentially proportional to traffic: when traffic is low, resource usage and expense are correspondingly small; when traffic peaks, the elasticity and pay-as-you-go of the Serverless architecture let resource usage and expense rise in step. Across the whole day there is none of the obvious resource waste and extra spending visible in the traditional virtual machine chart on the left.
In scenarios such as video and social applications, users upload large volumes of images, audio, and video at high frequency, placing heavy real-time and concurrency demands on the processing system. Images uploaded by users, for instance, can be handled by several separate functions covering image compression, format conversion, content moderation (screening for pornographic or violent content), and more, to meet the needs of different scenarios. For example:
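The fan-out described above can be sketched as follows. The function names, the event shape, and the single-process dispatch are all illustrative assumptions; on a real FaaS platform each step would be its own independently deployed, independently scaled function triggered by the upload event:

```python
# Each function handles one concern for an uploaded object.
def compress_image(event):
    return f"compressed {event['object_key']}"

def convert_format(event):
    return f"converted {event['object_key']} to webp"

def moderate_content(event):
    return f"moderation check queued for {event['object_key']}"

PIPELINE = [compress_image, convert_format, moderate_content]

def on_upload(event):
    """Simulate the platform dispatching one storage-upload event
    to every subscribed function; in production these run in parallel."""
    return [step(event) for step in PIPELINE]

for line in on_upload({"object_key": "photos/cat.png"}):
    print(line)
```

Because each concern lives in its own function, a spike in uploads scales all three independently, and adding a new processing step (say, thumbnail generation) means deploying one more function rather than redeploying a monolith.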
In addition, Serverless is also useful in scenarios such as real-time file processing, real-time stream processing, machine learning, IoT backends, mobile application backends, and Web applications:
The world's leading Serverless platform
Alibaba Cloud was among the earliest vendors in China to offer Serverless services, and in recent years its practice has yielded notable results, including but not limited to: in Forrester's Q1 2021 evaluation, Alibaba Cloud's Serverless product capability ranked first in China; in the CNCF 2020 cloud native survey report, Alibaba Cloud Serverless held the largest market share in China; and in CAICT's 2020 survey of Chinese cloud native users, Alibaba Cloud Serverless likewise ranked first in China by user count.
The reason Alibaba Cloud's Serverless products and services have won developers' recognition is that, on top of a secure architecture and leading technology, Alibaba Cloud Serverless takes a user-centered attitude.
Alibaba Cloud Serverless product layout
As the figure above shows, the bottom layer comprises the compute platform and the BaaS product layer. On the compute side there is the event-driven Function Compute (FC) as well as Serverless App Engine (SAE), a best practice for Serverless applications. On the BaaS side there are services at different levels covering databases, networking, messaging, and more, and these products are themselves becoming serverless, moving from cloud to cloud native to Serverless. The upper layer provides developer tools and an application center, offering developers a series of "All on Serverless" solutions and scenarios such as front-end integration, Web APIs, database processing, and AI inference.
Make Serverless simpler: Serverless Devs
At the ecosystem level, the Alibaba Cloud Serverless team has open-sourced Serverless Devs, a tool free of vendor lock-in that, in the spirit of making Serverless simpler, serves the entire life cycle of a Serverless application. On the specification side, Serverless Devs worked with CAICT to publish a Serverless tool-chain model; on the tooling side, the project was donated to the CNCF Sandbox, becoming the world's first Sandbox project among CNCF Serverless tools.
To sum up, we are serious and professional about building the Serverless architecture: we want to build products that move people, tools that serve well, work that takes responsibility, and content that is practical and innovative. Thank you for your attention to the Alibaba Cloud Serverless architecture.
Hello, I'm Liu Yu, product manager of Alibaba Cloud Serverless. I'm glad to explore the past and present of the Serverless architecture with you.
From cloud computing to cloud native to Serverless architecture, there are certain rules to follow in the rapid development of technology. So why did Serverless architecture come into being?
The birth of cloud computing
Since the world's first general-purpose computer ENIAC, the development of computer science and technology has never stopped moving forward. In recent years, it has changed with each passing day. There are artificial intelligence fields that are constantly breaking through and innovating, Internet of Things fields where 5G brings more opportunities, and cloud computing that is constantly entering ordinary people's homes.
Three key words can be seen in the figure. This is from 2003 to 2006, Google published three important papers, which pointed out the technical foundation and future opportunities of HDFS (Distributed File System), MapReduce (Parallel Computing) and Hbase (Distributed Database), and formally laid the development direction of cloud computing. Regarding these three articles, or these three technical points, someone once said that "because of them, cloud computing has officially started".
The development of cloud computing is rapid and obvious to all; However, with the progress of cloud computing, another term was born and quickly occupied the "Tuyere Banner", which was more widely concerned by the public, that is, cloud native.
Through the analysis of the text composition structure of cloud computing and cloud native, we can see that cloud native is actually an addition of a native between cloud and computing. Therefore, we can think that the rapid development of cloud computing, whether from technology iteration or concept upgrading, has finally produced the familiar cloud native computing.
What is cloud computing? In fact, the embryonic concept of cloud computing was born as early as 1961. At the centennial ceremony of MIT, John McCarthy, the winner of the Turing Prize in 1971, put forward a concept for the first time. This concept was later described as the "initial and advanced" daydream model of cloud computing. In the future, computers will become a public resource, which will be used by everyone just like water, electricity and gas in life In 1996, the term cloud computing was formally proposed; In 2009, UC Berkeley (University of California, Berkeley) published a more detailed description of cloud computing in its paper. It said that cloud computing is an ancient dream to be realized and a new name for computing as an infrastructure, which has long been a dream. It is rapidly becoming a commercial reality. At the same time, this article also clearly defines cloud computing: cloud computing includes application services on the Internet, as well as software and hardware facilities that provide these services in the data center.
The fierceness of cloud origin
Today, cloud native technology is also developing rapidly, so what is cloud native? In the article "What is a real cloud native", a very clear explanation is given: software, hardware and architecture generated by the cloud are real cloud native; The technology generated by the cloud is the cloud native technology. Indeed, being born in the cloud, growing up in the cloud, and being born of the cloud is the cloud native.
What does Cloud Native include? The familiar technologies, together with the three words "cloud native", are all cloud native related technologies, such as: database, cloud native database; Network, cloud native network, etc. In CNCF Landscape, you can see a description of cloud native product dimensions by Cloud Native Foundation, including database, stream, message, container image, servicemesh, gateway, K8S, and of course, a very popular term: Serverless.
The appearance of Serverless architecture
In many cases, the Serverless architecture is called a kind of adhesive. It links many other cloud native products with users' businesses, and at the same time provides extremely attractive technical dividends. For this reason, it is also selected by many projects and businesses. What is the Serverless architecture?
Through the structure of Serverless, it is not difficult to find the mind to be transferred. Server refers to the server, and Less means less energy. Therefore, the mind transferred by the Serverless architecture is to give more professional things to more professional people, so that developers can pay less attention to the underlying related content such as servers, and put more energy on more valuable business logic.
In 2009, UC Berkeley published an article on cloud computing. In the article, UC Berkeley made a clear definition of cloud computing, and also proposed various difficulties and challenges faced by ten cloud computing, including service availability, data security and auditability, and asserted that cloud computing will lead the next decade.
In 2019, exactly ten years later, UC Berkeley again issued a document, which not only explained what Serverless architecture is from multiple perspectives, for example, from the structural perspective, it affirmed that Serverless is the combination of FaaS and BaaS; From the feature point of view, products or services that are considered to be Serverless architecture should have the characteristics of pay as you go and elastic scalability; It is also very radical to say that Serverless will become the default computing paradigm in the cloud era and will replace Serverful computing, which also means the end of the server client model.
From IaaS to PaaS to Serverless, the development of cloud computing is becoming more and more clear, and the de serveralization is also becoming more and more obvious.
Whether we are talking about cloud native or Serverless architecture at this time, the concept of cloud is indeed constantly upgrading, and cloud technology is also constantly iterative. All these changes are actually for efficiency improvement, security improvement, cost reduction, and productivity driven.
What is the Serverless architecture?
Although there is no clear definition of Serverless architecture, many people accept that Serverless is a combination of FaaS and BaaS. The so-called FaaS is a function as a service, while BaaS refers to the back-end as a service. Together, the two become an inaccessible part of the Serverless architecture, providing developers with technical dividends to reduce costs and improve efficiency.
Admittedly, the CNCF Cloud Native Foundation affirmed in the Serverless white paper that Serverless is a combination of FaaS and BaaS; While UC Berkeley affirmed this statement in its paper, it also pointed out from the perspective of features that products or services that are considered to be Serverless architecture also need to have the features of pay as you go and elastic scaling, but this is also the "description" of 2019.
To date, the Serverless architecture has completed "self updating and iteration". In the white paper of Serverless released by the ICT Academy, it is clearly pointed out that the Serverless architecture computing platform includes two forms: function dimension and application dimension. With the development of time, Alibaba Cloud has pioneered the introduction of the Serverless application engine (SAE), which is an application oriented platform. In other words, it can actually be the best practice of serverless application.
So far, the composition of the Serverless architecture has become clear:
• From the perspective of structure, Serverless is a combination of computing platform and BaaS products. The computing platform includes event triggered function calculation, as well as serverless application engine, which is the best practice of serverless application. The BaaS layer includes Api network management, CDN, object storage, database and other cloud services.
• From the perspective of features, as UC Berkeley said, for products or services that are considered to be Serverless architecture, pay as you go and elastic scalability are also required.
Differences between Serverless architecture and traditional architecture
As a new computing paradigm in the cloud era, the Serverless architecture itself belongs to a natural distributed architecture. Its working principle is slightly different from the traditional architecture, although it has no earthshaking changes.
As shown in the figure, in the traditional architecture, after developers have developed applications, they also need to purchase virtual machine services, initial operating environment, install required software (such as MySQL and other database software, Nginx and other server software). After the environment is ready, they also need to upload the developed business code to start the application. At this time, users can successfully access the target application through network requests. However, if the number of application requests is too large or too small, the developer or operation and maintenance personnel also need to expand or shrink the relevant resources according to the actual number of requests, and add corresponding strategies in the load balancing reverse proxy module to ensure that the expansion and reduction operations take effect in a timely manner; At the same time, online users should not be affected when doing these operations.
Under the Serverless architecture, the application publishing process and working principle change to some extent:
After developers finish writing the business code, they only need to deploy or update it on the corresponding FaaS platform and then configure triggers according to actual business needs. For example, an HTTP trigger can be configured to expose a web application service, after which users can access the application over the network. Throughout this process, developers do not need to worry about purchasing or operating servers, installing software, or scaling application resources; they only need to focus on their own business logic. The various server software packages that must be installed and configured under the traditional architecture become configuration items managed by the cloud vendor, and resource scaling, which under the traditional architecture must be driven by server utilization, is likewise handled automatically by the cloud vendor.
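As a minimal sketch, a FaaS function behind an HTTP trigger might look like the following. The handler signature and the shape of the event payload vary by platform, so the field names used here are illustrative assumptions rather than any vendor's actual contract:

```python
import json

def handler(event, context):
    """Entry point invoked by the FaaS platform when the HTTP trigger fires.

    The event shape and handler signature are illustrative assumptions;
    consult your platform's documentation for the real contract.
    """
    # The platform may deliver the request body as a JSON string or a dict.
    body = json.loads(event) if isinstance(event, str) else event
    name = body.get("name", "world")
    # Return a response the trigger layer can translate into an HTTP reply.
    return json.dumps({"statusCode": 200, "body": f"Hello, {name}!"})
```

The developer deploys only this code; instance provisioning, routing, and scaling are left to the platform.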
In the traditional sense, elastic scaling means that when the project's capacity plan conflicts with the actual cluster load, that is, when the existing cluster's resources cannot bear the pressure, business stability is maintained by enlarging the cluster or allocating additional resources; conversely, when the cluster load is low, the system shrinks the cluster's resource allocation as far as possible to reduce idle waste and save costs. Under the Serverless architecture, elastic scaling is further generalized: from the user's point of view, the project no longer needs its own capacity planning, and increases and decreases in resources are decided entirely by the platform's scheduling.
In UC Berkeley's article, the characteristics and advantages of the Serverless architecture are described as follows: "It is no longer necessary to allocate resources manually for code execution, nor to specify the resources the service needs to run (how many machines, how much bandwidth, how much disk, and so on); you only need to provide the code, and the Serverless platform handles the rest. At the current stage, when the platform allocates resources, the user still needs to supply some policies, such as the specification and maximum concurrency of a single instance, or the maximum CPU utilization of a single instance. Ideally, learning algorithms would perform fully automatic, adaptive allocation." The "fully automatic, adaptive allocation" described here is precisely the elastic scalability of the Serverless architecture.
Elastic scaling under the Serverless architecture means that the platform can automatically allocate and destroy resources in response to fluctuations in business traffic, maximizing stability and performance while improving resource utilization. After a developer completes the business logic and deploys the code to the Serverless platform, the platform usually does not allocate computing resources immediately; instead it persists the business code, configuration, and related content. When traffic arrives, the platform automatically starts instances based on the actual traffic and configuration, and winds them down as traffic subsides; in some cases the number of instances can even drop to zero, meaning the platform allocates no resources to the function at all.
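The scheduling idea above, including scale-to-zero, can be sketched as a toy policy function: given the current request rate, decide how many instances to run. The per-instance capacity figure is a made-up assumption for illustration; real platforms derive it from instance specs and configured concurrency limits:

```python
def required_instances(requests_per_sec: int, capacity_per_instance: int = 10) -> int:
    """Return how many instances are needed for the current traffic level.

    capacity_per_instance (requests/sec one instance can absorb) is an
    illustrative assumption, not a real platform parameter.
    """
    if requests_per_sec <= 0:
        return 0  # scale to zero: no traffic, no allocated resources
    # Ceiling division: just enough instances to absorb the whole load.
    return -(-requests_per_sec // capacity_per_instance)
```

A real scheduler also smooths decisions over time to avoid thrashing, but the core contract is the same: capacity tracks traffic, down to zero.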
Elastic scalability, the core technical dividend of the Serverless architecture, also represents, to a certain extent, a step toward higher resource utilization and green computing.
In the elastic-scaling part of the figure above, the left side diagrams traffic and machine load under the traditional virtual-machine architecture, and the right side diagrams traffic and load under the elastic mode of the Serverless architecture. In both diagrams, the orange area represents the resource capacity perceived by the user side, and the blue line represents a website's traffic trend over one day. Comparing the two, it is easy to see that under the traditional virtual-machine architecture, resources must be added and removed manually at host-level granularity, which severely tests operations; because the granularity is so coarse, it is impossible to balance resource waste against performance and stability effectively, and the orange area above the blue line is wasted resource. In the Serverless diagram on the right, by contrast, capacity always tracks traffic: traffic peaks and valleys no longer require manual intervention by technicians, since all elasticity (both scale-out and scale-in) is provided by the cloud vendor. On the one hand, this mode relieves pressure on business operations staff and reduces the complexity of their work; on the other hand, from the user's perspective, actual resource consumption correlates positively with required resource consumption, which greatly reduces waste and, to a certain extent, embodies the idea of green computing.
Pay-as-you-go means paying according to actual usage: users no longer need to buy large amounts of resources up front, but pay as they consume. Even non-Serverless products and services offer some pay-as-you-go capability; virtual machines, for example, can be billed this way. What makes pay-as-you-go a genuine dividend of the Serverless architecture is that its billing granularity is much finer, and the resource utilization perceived on the user side approaches 100% (utilization does not literally reach 100%; this refers only to the user's perception at request granularity under the Serverless architecture).
Take a website as an example: resource utilization is relatively high during the day and relatively low at night. Yet once servers and other resources are purchased, the cost is a continuous expenditure regardless of that day's traffic; even with a pay-as-you-go model, the billing granularity is too coarse to maximize resource utilization. According to statistics cited by Forbes, typical servers in commercial and enterprise data centers deliver only 5%~15% of their average maximum processing capacity, which illustrates the low utilization and heavy waste of traditional servers.
With the advent of the Serverless architecture, users can entrust servers, databases, applications, and even logic to service providers. On the one hand, this spares users the burden of maintenance; on the other, users pay only at the granularity of their actual function usage. Service providers, for their part, can put more idle resources to work, which is a clear win from the perspective of both cost and green computing. Although the pay-as-you-go model of the Serverless architecture also charges based on resource usage, its billing granularity is finer:
• From the perspective of requests: the Serverless architecture bills at the request level, whereas traditional architectures such as virtual machines bill at the instance level (and the number of requests an instance supports is usually far greater than one);
• From the perspective of billing time: the Serverless architecture usually bills at second-level granularity (Alibaba Cloud now even supports hundred-millisecond or millisecond billing), whereas traditional virtual-machine architectures usually bill at hour-level granularity.
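The difference in billing granularity can be made concrete with a back-of-the-envelope comparison. The unit prices below are invented purely for illustration; they are not real Alibaba Cloud (or any vendor's) pricing:

```python
def faas_cost(requests: int, avg_duration_s: float, mem_gb: float,
              price_per_gb_second: float = 0.000016,
              price_per_million_requests: float = 0.2) -> float:
    """Request-level billing: pay only for actual execution time.

    All unit prices are illustrative assumptions, not real pricing.
    """
    gb_seconds = requests * avg_duration_s * mem_gb
    return gb_seconds * price_per_gb_second \
        + requests / 1_000_000 * price_per_million_requests

def vm_cost(hours: float, price_per_hour: float = 0.05) -> float:
    """Instance-level billing: pay for the whole hour, busy or idle."""
    return hours * price_per_hour

# A site serving 100k requests/day at 100 ms each with 0.25 GB memory
# pays for ~2500 GB-seconds of real work, while a VM bills all 24 hours
# whether traffic arrives or not.
daily_faas = faas_cost(100_000, 0.1, 0.25)
daily_vm = vm_cost(24)
```

Under these assumed prices the FaaS bill tracks the area under the traffic curve, while the VM bill is a flat daily rectangle regardless of load.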
In the figure above, the blue line shows a website's traffic trend over one day. Comparing the "traffic and expense under the traditional virtual-machine architecture" diagram with the "expense under the elastic mode of the Serverless architecture" diagram makes the difference clear. On the left, the business typically evaluates resource usage before going live; after evaluation, a server capable of withstanding at most 1300 PV per hour is purchased, so the total computing power this server provides is the orange area, and the day's cost corresponds to that orange area of computing power. But the truly effective resource use and cost is only the area below the traffic curve; the orange area above the curve is resource loss and extra expenditure. On the right, under the elastic mode of the Serverless architecture, expense is basically proportional to traffic: when traffic is low, resource usage and expense are correspondingly small; when traffic is high, the elastic scalability and pay-as-you-go capabilities of the Serverless architecture make resource usage and expense grow in step. Throughout the process there is none of the obvious resource waste and extra expenditure visible in the traditional virtual-machine diagram on the left.
In scenarios such as video and social applications, users frequently upload large volumes of pictures, audio, and video, which places high demands on the real-time performance and concurrency of the processing system. For images uploaded by users, for example, multiple functions can process them separately, performing image compression, format conversion, content moderation (such as detecting pornographic or violent content), and so on, to meet the needs of different scenarios. For example:
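The fan-out pattern described above can be sketched as independent stages, each of which would be deployed as its own function triggered by the same upload event. The stage bodies here are placeholder stubs, not real image-processing code:

```python
def compress(image: bytes) -> bytes:
    # Placeholder: a real function would recompress the image data.
    return image[: len(image) // 2]

def convert_format(image: bytes) -> str:
    # Placeholder: a real function would transcode, e.g. PNG -> WebP.
    return "webp"

def moderate(image: bytes) -> bool:
    # Placeholder: a real function would run a content-moderation model.
    return True

def on_upload(image: bytes) -> dict:
    """Fan one upload event out to several independent processing functions.

    On a real Serverless platform each stage runs as a separate function,
    invoked in parallel by the object-storage upload event, so each stage
    scales on its own.
    """
    stages = (compress, convert_format, moderate)
    return {fn.__name__: fn(image) for fn in stages}
```

Splitting the stages into separate functions means a spike in, say, moderation load scales only that function, not the whole pipeline.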
In addition, Serverless can also play a role in scenarios such as real-time file processing, real-time stream processing, machine learning, IoT backends, mobile-application backends, and web applications:
The world's leading Serverless platform
Alibaba Cloud was among the first domestic vendors to provide Serverless services, and over the past few years it has achieved outstanding results in practice. Among other distinctions: in Forrester's Q1 2021 evaluation, Alibaba Cloud's Serverless product capability ranked first in China; the CNCF 2020 cloud-native survey report found that Alibaba Cloud Serverless held the largest market share in China; and CAICT's 2020 survey of China's cloud-native users likewise ranked Alibaba Cloud Serverless first in China by users.
Alibaba Cloud's Serverless products and services have won developers' recognition because, on top of a secure architecture and leading technology, Alibaba Cloud Serverless keeps the user at the center.
Alibaba Cloud Serverless product layout
As the figure above shows, the bottom layer consists of the computing platform and the BaaS product layer. The computing products include the event-driven Function Compute (FC) as well as the Serverless Application Engine (SAE), a best practice for Serverless applications. The BaaS services span different layers, including services, databases, networking, and messaging, and these products themselves are becoming Serverless, moving from cloud to cloud native to Serverless. The upper layer offers developer tools and an application center, which provide developers with a series of All-on-Serverless solutions and scenarios, such as front-end integration, Web APIs, database processing, and AI inference.
Make Serverless simpler: Serverless Devs
At the ecosystem level, the Alibaba Cloud Serverless team has open-sourced Serverless Devs, a tool free of vendor lock-in that, in keeping with the goal of making Serverless simpler, serves the entire life cycle of a Serverless application. On the specification side, Serverless Devs worked with CAICT to publish a Serverless tool-chain model; on the tooling side, the project was donated to the CNCF Sandbox, becoming the world's first Sandbox project in the CNCF Serverless Tools category.
In summary, we take building the Serverless architecture seriously and professionally: we want to build a product that moves people, a good tool, responsible work, and practical, innovative content. Thank you for your attention to the Alibaba Cloud Serverless architecture.