How can enterprises use Serverless to rapidly expand business systems?
From technology upgrading to cost reduction and efficiency improvement
Hello, everyone. I'm very glad to share today's Serverless topic with you.
Many of you have come here today to learn about Serverless. As the R&D director for the Serverless event-driven ecosystem, the asynchronous system, and Serverless Workflow, I hope today's sharing helps you understand the technical principles behind Serverless and how it helps enterprises reduce costs and improve efficiency.
I will also share some best-practice guidance on how to apply Serverless for technology upgrades, architecture transformation, and cost reduction and efficiency improvement while enterprises are still transitioning from containerization to Serverless. Finally, we will look at some Serverless customer cases from actual production to help you understand how Serverless is used in practice and how it solves business pain points.
Core drivers and pain points of enterprise technology upgrades
Core drivers of enterprise technology upgrades
First, let's understand the three core drivers of technology upgrading in the production process of enterprises:
• The first is the tension between rapid business growth and insufficient IT capabilities. When an emerging business takes off, its unpredictability makes it hard to plan ahead and prepare at the IT level, yet the enterprise needs matching IT capabilities to support rapid growth within a very short time.
• The second is the demand to improve R&D efficiency, whether through technical means or through organizational optimization.
• The third is the demand for IT cost optimization. Whether at a relatively early stage of development or in a period of stable growth, enterprises pay close attention to cost in order to survive or break even, and they look to technology upgrades to achieve that goal.
Pain points of enterprise application development
This article, "The Past and Present Lives of Serverless", gives you a good foreshadowing to help you understand why Serverless came into being? What is the core problem it wants to solve? Back to the core goal of enterprise development: realize business logic faster, reduce the development time on environment building and system connection, and focus more time on business development.
After development is complete, you need a runtime environment in which to deploy the business code and provide services, along with the related maintenance work during operation, commonly referred to as operations. Across this whole process (what we often call DevOps), the pain points are familiar to every R&D and operations engineer, and they boil down to the problem of enterprise R&D efficiency.
Beyond R&D efficiency, another critical concern is R&D cost; here we only discuss the IT cost of enterprise R&D. The ideal model is, of course, to pay only for the computation that actually generates business value, and that computation is aligned with the life cycle of a business request. Before a real business request arrives, or in the intervals between requests, we still have to pay for the computing resources we hold even though they are idle from the business's point of view. Eliminating that waste is exactly what Serverless aims to achieve with pay-per-request, reducing costs for customers.
To help you understand pay-per-request, compare it with the K8s or ECS charging model. Once you purchase ECS instances or a K8s cluster you pay for them; once you create a Pod, the cluster allocates resources to it, and even when no request traffic arrives you still pay for those Pod resources.
With Serverless, you provide code (a code package or a container image) and deploy it on the platform. The instances may be warming up, running, or in a standby state, but no computing cost is incurred before a real request arrives or during the interval between two requests.
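To make the difference concrete, here is a minimal back-of-the-envelope sketch in Python. The unit prices and traffic figures are hypothetical and only illustrate how pay-per-request billing tracks actual usage while a reserved Pod or instance is billed for the time it is held.

```python
# Hypothetical unit prices; real prices depend on region, product, and tier.
RESERVED_PRICE_PER_HOUR = 0.05          # an always-on 2 GB container or instance
FC_PRICE_PER_GB_SECOND = 0.000016       # illustrative pay-per-request compute price
FC_PRICE_PER_MILLION_REQUESTS = 0.2     # illustrative per-invocation price

def reserved_monthly_cost(hours: float = 730) -> float:
    """Cost of holding the resource 24x7, regardless of traffic."""
    return RESERVED_PRICE_PER_HOUR * hours

def serverless_monthly_cost(requests: int, avg_duration_s: float = 0.2,
                            memory_gb: float = 2.0) -> float:
    """Cost that scales with the requests actually served."""
    compute = requests * avg_duration_s * memory_gb * FC_PRICE_PER_GB_SECOND
    invocations = requests / 1_000_000 * FC_PRICE_PER_MILLION_REQUESTS
    return compute + invocations

if __name__ == "__main__":
    for monthly_requests in (100_000, 1_000_000, 10_000_000):
        print(monthly_requests,
              round(reserved_monthly_cost(), 2),
              round(serverless_monthly_cost(monthly_requests), 2))
```

With low or bursty traffic, the pay-per-request figure stays far below the reserved figure, which is the cost logic described above.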
Pain points of enterprise business system development
For enterprises, what we usually call an application is not just a program in the simple sense, but an information system that carries the business capabilities of the entire enterprise.
When we build a business system, we usually go through several stages of architecture selection, starting with the technical architecture. Many enterprise systems are not built from scratch but evolve iteratively as the business accumulates. When you face a new business system, or an existing one that has to be rebuilt, you must choose an architecture and open-source frameworks while weighing the scalability of the architecture, the long-term maintenance of the framework, community maturity, the learning curve, and the ease of recruiting developers for subsequent business development.
For self-built systems, especially in the Internet environment, the operations burden and stability challenges of a distributed system can overwhelm an R&D team, and integrating the required technologies is difficult. This makes building a distributed business system in-house a major challenge for enterprises.
Pain points of distributed system development
The main components of a typical distributed system must address load balancing, flow control, resource scheduling, observability, stability, high availability, and service governance. The continuous R&D investment and operations burden these require are the main pain points of building such systems in-house.
Serverless Function Compute helps technology upgrades deliver cost reduction and efficiency gains
Facing these demands and challenges, before discussing how Serverless addresses them at the product level, let's review the original intent of Serverless. As a cutting-edge field of cloud computing, Serverless set out to achieve "extreme elasticity, server-free operations, and pay-as-needed". Starting from this goal, Serverless tackles the cost and efficiency problems we face from the angle of technology upgrades; as the saying goes, if the direction is right, you need not fear the long road.
Core objectives of Function Compute
"Pay as you need, no server operation and maintenance, and extreme flexibility". These three concepts have found a good balance between the customer perspective and technical terms. Whether it is the R&D personnel responsible for Serverless technology or the decision-makers responsible for enterprise technology upgrading, they can directly understand the value that Serverless wants to realize. Centering on these three concepts, we need to achieve two core goals: efficiency improvement goal and cost optimization goal.
Pay on demand is more from a business perspective. It is easy to understand that pay on demand is based on requests. Server free O&M, from the perspective of O&M or R&D, means that you don't want to spend more time on server purchase; In terms of operation and maintenance, it includes a series of operations and maintenance, such as elastic expansion of resources and health check. Carrying these two things requires a basic product capability. The simplest is extreme elasticity, which means I can use them when I need them and recycle them when I don't need them. Only in this way can I support the pay as you go logic.
For real business value request billing, reduce user costs, and reduce the retention cost of customer resources through flexibility. In fact, the smaller the effort, the shorter the retention time, and the closer to the real computing time, so as to reduce costs. Efficiency goals must first be developed in a simple way. If they are complex, it is difficult for R&D personnel to accept them, and they will not play a role in increasing efficiency. In addition, on the basis of simple development, rapid deployment reduces the time for R&D personnel to participate in the release and expansion, which is the so-called goal of cost and efficiency.
The Function Compute programming model makes application development simpler
After understanding the basic goals of Serverless, we need to discuss how Serverless Function Compute achieves them. We must first evaluate what it can achieve from its programming model: Function Compute is an event-driven, fully managed computing service.
With Function Compute, users do not need to purchase or manage infrastructure such as servers; they only write and upload code. Function Compute prepares the computing resources, runs tasks elastically and reliably, and provides log query, performance monitoring, alerting, and other capabilities.
Developing independent function units at function granularity allows fast debugging, deployment, and launch, saving a great deal of resource procurement and environment setup. At the same time, Function Compute is an event-driven model, so users do not need to handle data transfer between services, which removes much of the service-integration code. Together, event-driven execution, function-granularity development, and server-free operations let developers focus on business logic, achieving a real technology upgrade and improving R&D efficiency.
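As a minimal illustration of function-granularity development, here is a sketch of an event function in Python. The handler signature follows the common event-function convention (an event payload plus a context object), and the business fields are hypothetical.

```python
import json

def handler(event, context):
    # The platform invokes this entry point once per event; there is no
    # server, scaling, or service-wiring code inside the function.
    payload = json.loads(event)             # the event arrives as a JSON document
    order_id = payload.get("orderId")       # hypothetical business field
    # ... business logic only ...
    return json.dumps({"orderId": order_id, "status": "processed"})
```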
The Function Compute programming model makes application running costs lower
Beyond the R&D efficiency gains of the development model, let's look at how Function Compute implements the logic that reduces customer cost. Paying by user request, i.e., by actual traffic, is the ideal state, but it poses a huge technical challenge: the startup of a function instance must be faster than the user's response-time (RT) requirement, so cold-start performance is critical. Extreme elasticity thus becomes the technical foundation that allows Serverless to pay on demand and reduce business cost. Through the "extreme elasticity" plus "pay as needed" model, Function Compute realizes the real underlying cost-reduction logic.
Out-of-the-box atomic capabilities of Function Compute
Whether for cloud developers or for enterprise customers trying to upgrade their business, the three concepts of Serverless, "pay as you go, no server operations, and extreme elasticity", are by now widely known. But "what can Serverless do, and how?" is still the question we hear most often.
In the early stage of Serverless R&D, the technical team usually focuses on elasticity and cold-start acceleration, hoping to highlight the product's technical competitiveness, establish a leading market position, and deliver the technical goal of extreme elasticity so as to attract developers and enterprise customers. At this stage, exploration of Serverless is guided mainly by technical ambition.
As our understanding of Serverless deepens and its elasticity matures, we think more about the value beyond elasticity when customers want to use Serverless in production without fundamentally changing their systems. Elasticity then extends beyond computing resources to network, storage, and other related resources, and as a basic system capability it permeates every aspect of the product. We need to consider, from a systematic perspective, what Serverless can do, what it brings to customers, and how to let the business focus only on the parts that truly have to be customized.
Before answering what Serverless can do and how to do it, let's first look at the out-of-the-box capabilities it already provides.
Function Compute - the connector of cloud products
As a computing platform, Serverless Function Compute (FC) is not an isolated island. Only when it is linked with the other products of the cloud computing ecosystem to form a distributed cloud development environment can it maximize its value and meet enterprise customers' need to build business systems on top of it.
The greatest value of this linkage is solving the connection problem between the services behind cloud products, and it is also the foundation of Function Compute's event-driven architecture. The value of event-driven design is to surface the hidden call logic in a more intuitive way; these calls no longer need to appear in the user's business code. Connection also implies dependency between systems, and that dependency ultimately shows up as coupling. Coupling does not indicate how strong a functional dependency is; it is a property of the software implementation, the result of how the architecture is realized. This is exactly the "high cohesion, low coupling" requirement emphasized in the software architecture field.
An event-driven architecture meets this requirement. Built on it, the internal implementation of the software no longer resembles a typical monolithic application or traditional microservice, which has to embed the clients of multiple dependent services directly into the business system.
In a cloud-based development environment, the services carried by cloud products are relatively cohesive and play an important role in cloud-native architecture. The event notification mechanism between cloud products helps customers build their own cloud-native business systems on top of multiple cloud products; otherwise, watching for events across cloud products is complex and expensive. Beyond the development efficiency gained from product connection, when a user subscribes to an event and provides processing logic, the events that do not need processing have implicitly been filtered out: with event-driven design, every event request is an effective driver.
At present, Function Compute has built a complete event ecosystem by integrating with many cloud products, including API Gateway, message middleware (MQ), Object Storage Service (OSS), Tablestore, Log Service (SLS), CDN, DataHub, cloud telephony, and more. Through EventBridge it also receives the operations events (log audit, cloud monitoring, product O&M) of Alibaba Cloud products, helping customers combine Function Compute with many cloud products into a cloud-native business system based on an event-driven architecture.
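As a rough illustration of what such an event-driven connection looks like in code, the sketch below parses an object-storage event delivered to a function. The exact payload structure depends on the trigger type, so the field names here are illustrative rather than authoritative.

```python
import json

def handler(event, context):
    # An OSS-style trigger delivers a JSON document describing which object
    # changed; the field names below are illustrative and should be checked
    # against the trigger's documentation.
    notification = json.loads(event)
    for record in notification.get("events", []):
        bucket = record["oss"]["bucket"]["name"]
        key = record["oss"]["object"]["key"]
        print(f"object created: oss://{bucket}/{key}")
        # hand the object off to business logic, e.g. thumbnailing or transcoding
```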
Function Compute - an efficient event-driven model for the message ecosystem
With their asynchronous decoupling and peak-shaving characteristics, message products have become an essential part of Internet distributed architectures, and Serverless function computing has its own application scenarios there. For integration with the message ecosystem, Function Compute built dedicated support at the architecture level: based on the EventStreaming channel capability of the EventBridge product, a general message-consumption service, the Poller Service, was built, and on top of this architecture users get trigger capabilities for RocketMQ, Kafka, RabbitMQ, MNS, and other message types.
Turning consumption logic into a service separates the platform from the business logic and separates consumption from processing. The pull model of the traditional message architecture is converted into the event-driven push model of Serverless, with the message-processing compute carried by Function Compute, achieving serverless message processing. This architecture frees customers from integrating message clients, simplifies the implementation of message-processing logic, and, for peak-and-trough traffic patterns, scales resources dynamically to reduce cost.
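Here is a minimal sketch of the processing side under this push model, assuming messages arrive at the function as a JSON batch; the record structure is illustrative and should be checked against the actual trigger documentation.

```python
import json

def handler(event, context):
    # The trigger pushes a batch of message records; no consumer client,
    # offset management, or rebalancing code lives inside the function.
    records = json.loads(event)
    for record in records:
        body = record.get("data", {}).get("body", "")   # illustrative field names
        handle_message(body)

def handle_message(body: str):
    # Pure business logic: the consumption plumbing is owned by the platform.
    print("processing message:", body)
```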
Function Compute - out-of-the-box asynchronous task processing
The figure below shows the basic model of a typical asynchronous task processing system: tasks are submitted through an API, scheduled, executed, and their results are finally delivered.
In a traditional task-processing framework, task scheduling, load balancing, and flow control are usually built on top of a service gateway; this is the most basic yet most core and most complex part of a distributed system, and it demands the most engineering effort. The back end is typically implemented with process-level in-memory queues and a runtime-level thread-pool model to dispatch and execute tasks. The thread pool is tightly coupled to the chosen language runtime, so the system architecture ends up highly dependent on the programming language.
The flow of the Serverless asynchronous task system is as follows: users submit tasks through the API; after requests reach the Serverless service gateway they are stored in an asynchronous request queue; the Async Service then takes over these requests, requests scheduling to obtain back-end resources, and dispatches each request to specific back-end resources for execution.
In this architecture, the Async Service takes on the request dispatching, load balancing, flow control, and resource scheduling that the traditional architecture implements itself, and the function cluster behaves like an abstract distributed thread pool. Under the Function Compute model, instances are isolated from each other and resources scale horizontally, so the capacity of the overall resource pool avoids the thread-pool capacity limits and scheduling bottlenecks caused by single-machine resource constraints in a traditional application architecture. Moreover, the task execution environment is not restricted by the runtime of the overall business system; this is the advantage of a Serverless asynchronous task system over a traditional one.
Architecturally, the processing logic of the Serverless task system is very simple: most of the capabilities a distributed system relies on are implemented transparently by the Async Service role, and users mostly just provide the task-processing logic as functions. The design avoids dependence on a language-runtime thread pool; the whole Function Compute cluster acts as a thread pool of "unlimited" capacity. In this service model, users only submit requests, while concurrency, flow control, and backlog handling are all handled by the Serverless platform. Of course, in practice you still need to configure the concurrency of asynchronous task processing, the error-retry policy, and result delivery according to your business characteristics.
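A hedged sketch of submitting such a task from an existing system is shown below. It assumes the Python SDK client and the asynchronous-invocation header named here; the package name, endpoint format, and service/function names are placeholders to verify against the SDK documentation you actually use.

```python
# pip install aliyun-fc2   (assumed package name; verify against current docs)
import json
import fc2

client = fc2.Client(
    endpoint="https://<account-id>.<region>.fc.aliyuncs.com",  # placeholder endpoint
    accessKeyID="<access-key-id>",
    accessKeySecret="<access-key-secret>",
)

# Submit a task asynchronously; the platform queues it, schedules resources,
# applies the configured retry policy, and delivers the result downstream.
client.invoke_function(
    "task-service",                                   # hypothetical service name
    "generate-report",                                # hypothetical function name
    payload=json.dumps({"reportId": "r-20230601"}),
    headers={"x-fc-invocation-type": "Async"},        # assumed async-invocation header
)
```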
Function Compute - out-of-the-box observability
Once functions provide so many out-of-the-box capabilities, the most pressing customer need is to observe them: how do we expose the metrics and runtime states users care about for development and debugging, business logic optimization, system stability, and metering and billing?
We need to provide customers with out-of-the-box observability. Especially in the early transition stage of adopting Serverless, it is essential to open up the system's black box to customers and make product billing transparent. With the current out-of-the-box observability, customers can clearly see the whole task-processing flow and the computing resources consumed.
How can enterprises use FC to rapidly expand business systems?
Next, I will focus on how, in real business systems, enterprises can use these atomic capabilities of Function Compute to rapidly extend their own systems, so that Serverless truly becomes a dependable foundation for business extension and architecture upgrades.
Quickly integrating the atomic capabilities of Function Compute into enterprise systems
We said at the beginning that we want to move business systems all onto Serverless, so that an enterprise's entire business system runs on it. Realistically, that goal is still very challenging at this stage; we are in a relatively early transition period. So we offer some best-practice suggestions to help enterprises, on top of their existing systems, use the atomic capabilities of Serverless to reduce costs and improve efficiency, introduce Serverless into their own business systems, and experience its business value through continued use.
How do you quickly integrate these capabilities with existing systems? Function Compute provides SDK access, HTTP URLs, and a variety of event-driven integration methods, and its VPC support connects the network space of Function Compute with the customer's existing systems. For business workloads, it offers runtimes of several kinds: official standard runtimes, user-defined Custom Runtimes, and a container-image deployment mode integrated with the container ecosystem, minimizing the barrier to running a customer's business system on the Serverless platform. Compared with the container ecosystem, Function Compute also provides significant image pre-warming acceleration to start the customer's business system quickly and serve traffic.
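To sketch the Custom Runtime idea, the snippet below starts a plain HTTP server inside the function instance and handles each invocation as an HTTP request. The listening port of 9000 and the /invoke path are assumptions about the platform contract and should be verified against the runtime documentation.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class InvokeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Assumption: the platform POSTs each invocation to /invoke.
        if self.path != "/invoke":
            self.send_response(404)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)          # raw event payload
        result = b"handled: " + body            # business logic goes here
        self.send_response(200)
        self.end_headers()
        self.wfile.write(result)

if __name__ == "__main__":
    # Assumed default port for a custom runtime; check the platform docs.
    HTTPServer(("0.0.0.0", 9000), InvokeHandler).serve_forever()
```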
Splitting out task/job processing logic
Splitting out task/job processing logic: in some microservice business systems, the Serverless asynchronous invocation and asynchronous task capabilities can carry the system's task-processing requirements. Function Compute provides HTTP, SDK, timed, and event-triggered integration methods for submitting requests and executing the related tasks, as in the sketch below.
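For example, a timed trigger can replace an in-process cron job. The payload structure shown (trigger name, trigger time, user payload) is an assumption to verify against the trigger documentation.

```python
import json

def handler(event, context):
    # A timed trigger fires on a schedule and passes a small JSON document.
    evt = json.loads(event)
    print("triggered at:", evt.get("triggerTime"))      # illustrative fields
    run_nightly_reconciliation(evt.get("payload"))

def run_nightly_reconciliation(payload):
    # The job logic that previously lived in a long-running worker process.
    print("reconciling with payload:", payload)
```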
Splitting out MQ message processing logic
Splitting out MQ message processing logic: enterprise business systems usually contain many subsystems linked by message middleware. Function Compute provides event triggers for the message cloud products, so the original logic of listening to a queue and actively pulling messages for consumption is replaced by Serverless triggers, and the message-processing logic is carried by Function Compute. Event-driven design decouples message consumption from message processing, and the reliable consumption capability provided by the trigger yields a unified, serverless message-processing flow.
Splitting out file processing logic
Splitting out file processing logic: for file and video processing businesses, data flows between the file system and the database. The OSS triggers and database-class triggers (such as Tablestore) provided by Function Compute let you complete the relevant data processing in an event-driven way, as sketched below.
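A hedged sketch of the processing step, assuming the oss2 client library and temporary credentials exposed on the function context; the context field names, endpoint, and key layout are assumptions to verify against your environment.

```python
# pip install oss2
import json
import oss2

def handler(event, context):
    record = json.loads(event)["events"][0]            # illustrative event shape
    bucket_name = record["oss"]["bucket"]["name"]
    key = record["oss"]["object"]["key"]

    # Assumption: the context carries temporary credentials for accessing OSS.
    creds = context.credentials
    auth = oss2.StsAuth(creds.access_key_id,
                        creds.access_key_secret,
                        creds.security_token)
    bucket = oss2.Bucket(auth, "https://oss-<region>.aliyuncs.com", bucket_name)

    data = bucket.get_object(key).read()               # fetch the newly uploaded file
    result = transform(data)                           # business-specific processing
    bucket.put_object(f"processed/{key}", result)      # write the derived artifact

def transform(data: bytes) -> bytes:
    # Placeholder transformation standing in for real file/video processing.
    return data
```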
Splitting out data processing logic
Splitting out data processing logic: some data-processing business logic can use the Serverless ETL capabilities provided by the message products, with Function Compute rapidly extending the source and destination ends as the business requires.
Customer scenario case analysis
Next, I will share some real customer scenarios for Serverless.
Algorithm tasks
The following is a typical business-scenario architecture in the algorithm field, covering algorithm tasks, high-performance computing, AI inference, advertising image recognition, and intelligent operations capabilities.
These parts are usually relatively independent within the business system. With Function Compute, an inference model can be deployed rapidly, and the images or parameter sets required for inference are passed to the function with each request. If you want a layer of decoupling, you can use MQ and let MQ events trigger task execution. The final results are written to OSS, and the generated files can be processed further through event triggers, as in the sketch below.
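A hedged sketch of such an inference function follows; the model loader is a placeholder, and the point is only that the model is loaded once per instance (at cold start) and reused across requests.

```python
import json

# Loaded once per instance during cold start and reused for every request.
MODEL = None

def load_model():
    # Placeholder for loading a real model (e.g. from a mounted file or OSS).
    return lambda features: sum(features) / max(len(features), 1)

def handler(event, context):
    global MODEL
    if MODEL is None:
        MODEL = load_model()
    request = json.loads(event)
    score = MODEL(request.get("features", []))      # hypothetical input field
    return json.dumps({"score": score})
```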
Consumer Electronics
In the consumer electronics field, a customer's Serverless solution handles IoT-related video transmission: data collected from IoT devices is analyzed further, and clients ultimately consume the processed video data.
Interactive entertainment industry
The following is a Serverless scenario from a microblogging platform for image access and processing: it uses Function Compute to serve cold-data access and personalized image processing.
Education industry
The following is a Serverless solution for the education industry, where there is a common demand for transcoding and rebroadcasting, live-stream recording, and live-content review. During a live broadcast, the legitimacy of the content must be reviewed by capturing frames, as in the sketch below.
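Here is a hedged sketch of the frame-capture step, assuming the function runs in a custom container image that bundles ffmpeg and that the stream segment has already been written to local storage; the command line, event fields, and paths are illustrative.

```python
import json
import subprocess

def handler(event, context):
    evt = json.loads(event)
    segment_path = evt.get("segmentPath", "/tmp/segment.ts")   # hypothetical field
    frame_path = "/tmp/frame.jpg"

    # Capture one frame from the segment for content review; assumes ffmpeg
    # is available in the image (e.g. via a custom container runtime).
    subprocess.run(
        ["ffmpeg", "-y", "-i", segment_path, "-frames:v", "1", frame_path],
        check=True,
    )
    submit_for_review(frame_path)                    # hand off to the review service
    return json.dumps({"frame": frame_path})

def submit_for_review(path):
    print("submitting frame for review:", path)      # placeholder
```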
Entertainment industry
The following is a Serverless solution for the entertainment industry, targeting dynamic frame-capture review of movie content and slice transcoding. The figure below shows a technical solution from Pumpkin Movie, which uses Function Compute to implement this capability.
Game industry
The following is a typical application of Function Compute in the game industry, covering data processing, battle settlement, game packaging, and other scenarios. Several leading game companies already use this combination of Serverless solutions in their own business systems. These scenarios are quite distinctive: take battle settlement as an example. During a dungeon run, continuous real-time computation is not needed; calculation is usually required only when a run is about to end, or at certain settlement points during the run. This is a typical Serverless application scenario.