
New Frontend Technology Revolution Triggered by Serverless

This article discusses the evolution of the frontend development model and highlights best practices in Serverless development and function performance.

By Jiang Hang (Zhuanggong), a frontend engineer at Alibaba Cloud

1) Evolution of the Frontend Development Model

The evolution of the frontend development model consists of four main phases.

1.1 Dynamic Page Rendering Based on Templates

In the early Internet age, web pages were simple. They were static or dynamic pages for displaying and disseminating information. At that time, it was easy to develop webpages. Technologies such as JavaServer Pages (JSP) and PHP (Hypertext Preprocessor) were used to develop dynamic templates, and then web servers parsed the templates into HTML files. Browsers only needed to render these HTML files. In this phase, there was no frontend and backend division, and backend engineers usually wrote the frontend pages.

1.2 Frontend and Backend Division Based on AJAX

The Asynchronous JavaScript and XML (AJAX) technology was introduced in 2005. It opened a new chapter in web development. Based on AJAX, the web could be divided into frontend and backend: the frontend handled page interaction and the backend processed business logic, with the two exchanging data through interfaces. It was no longer necessary to write hard-to-maintain HTML in various backend languages. The complexity of web pages also shifted from the backend web server to the browser's JavaScript. This is how the role of the frontend engineer came into being.

1.3 Frontend Engineering Based on Node.js

The release of Node.js in 2009 was a milestone for frontend engineers. At the same time, the CommonJS specification and the Node.js package manager (npm) appeared. Subsequently, a series of Node.js-based frontend development tools such as Grunt, Gulp, and Webpack emerged.

Around 2013, the first versions of React.js, Angular, and Vue.js were released one after another. Then, page-based development transitioned to component-based development. After development, tools such as Webpack could be used for packaging and building, and build results could be published through a command line tool based on Node.js. Frontend development became normalized, standardized, and engineering-oriented.

1.4 Full-stack Development Based on Node.js

Node.js is significant because it allows JavaScript, which previously ran only in browsers, to run on servers. Frontend engineers therefore started using Node.js for full-stack development and gradually became full-stack engineers.

On the other hand, both the frontend and backend were developing further. Almost from the time of Node.js' birth, the backend began changing from the monolithic application model to the microservices model. This led to a divergence in the frontend and backend division of labor. With the rise of the microservices model, backend interfaces gradually became atomic. Microservice interfaces were no longer directly oriented to pages, and frontend calls became complicated.

In response, the Backend For Frontend (BFF) model was developed. A BFF layer was added between the microservices and frontend. BFF aggregated and tailored the interfaces and then output the interfaces to the frontend. The BFF layer did not assume the underlying backend tasks and was more closely related to the frontend. Therefore, frontend engineers selected Node.js to implement the BFF. This was also part of the widespread application of Node.js on the server-side.
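To make the idea concrete, the sketch below shows what a tiny traditional Node.js BFF endpoint might look like: it aggregates two microservice interfaces and tailors the result for a page. Express, node-fetch, and the service URLs are assumptions used only for illustration, not part of any specific architecture described here.

// A minimal traditional BFF sketch (Express and node-fetch assumed; service URLs are hypothetical)
const express = require('express');
const fetch = require('node-fetch');

const app = express();

// One BFF route aggregates several atomic microservice interfaces
app.get('/bff/home', async (req, res) => {
  try {
    const [user, orders] = await Promise.all([
      fetch('http://user-service/api/users/' + req.query.id).then(r => r.json()),
      fetch('http://order-service/api/orders?userId=' + req.query.id).then(r => r.json()),
    ]);
    // Tailor the combined data to exactly what the page needs
    res.json({ nickname: user.nickname, recentOrders: orders.slice(0, 5) });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000);

A traditional BFF like this still has to be deployed, scaled, and maintained by the team that writes it, which is exactly the burden discussed later in this article.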

1.5 Summary

Revolutionary technologies have driven every change in the frontend development model. First, there was AJAX, and then Node.js. So what is the next revolutionary technology? Obviously, it is Serverless.

2) Frontend Solutions in Serverless Services

2.1 Introduction to Serverless

The Cloud Native Computing Foundation (CNCF) defines Serverless as the concept of building and running applications that do not require server management.

In fact, Serverless is already associated with the frontend, but you may not be aware of it. Take Alibaba Cloud Content Delivery Network (CDN) as an example. After you publish static resources to the CDN, you do not need to consider the quantity and distribution of the CDN nodes or how they implement load balancing and network acceleration. Therefore, the CDN is Serverless for frontend engineers.

Object Storage Service (OSS) is similar to CDN. You only need to upload files to OSS and then use them directly without considering how the service accesses files or controls their permissions. Therefore, the OSS is also Serverless for frontend engineers. Some third-party API services are also Serverless because you do not need to consider servers while using these services.

However, this intuitive understanding is not enough. We need an accurate definition. From a technical perspective, Serverless is the combination of Function as a Service (FaaS) and Backend as a Service (BaaS).

Serverless = FaaS + BaaS.

[Figure 1]

In short, FaaS refers to platforms that run functions, such as Alibaba Cloud Function Compute and AWS Lambda, while BaaS refers to backend cloud services such as cloud databases, object storage services, and message queues. BaaS greatly simplifies application development.

Think of a Serverless application as a function that uses BaaS and runs on a FaaS platform.

Serverless services include the following features:

  • Event-driven: On a FaaS platform, functions are driven by a series of events (see the minimal handler sketch after this list).
  • Stateless: Each time a function is executed, a different container may be used, so memory and data cannot be shared between executions. To share data, you must use a third-party service such as Redis.
  • No maintenance: With Serverless, you do not need to worry about servers or their operations and maintenance (O&M). This is also a core idea of Serverless.
  • Low cost: Serverless services are relatively inexpensive because you only pay for each function execution. If no function is executed, no cost is incurred and no server resources are used.
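As a minimal illustration of the event-driven, stateless model, the sketch below follows the Node.js handler style used by FaaS platforms such as Alibaba Cloud Function Compute; the exact event format and context fields vary by platform, so treat the details as assumptions.

// A minimal event-driven handler sketch (Function Compute style; event format varies by platform)
module.exports.handler = (event, context, callback) => {
  // The event that triggered this execution: an HTTP request, an OSS event, a timer, etc.
  const payload = JSON.parse(event.toString());

  // The function is stateless: nothing in memory survives between invocations,
  // so any data to be shared must live in an external service such as Redis.
  callback(null, { received: payload, requestId: context.requestId });
};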

2.2 Architecture of Frontend Solutions in Serverless Services

[Figure 2]

The preceding figure shows the major Serverless services and the corresponding frontend solutions, with infrastructure at the bottom and development tools above it. Cloud computing vendors provide the infrastructure, including cloud computing platforms, various BaaS services, and the FaaS platforms that run functions. Frontend engineers are users of Serverless, so the development tool layer matters most to them: these tools are used to develop, debug, and deploy Serverless services.

2.3 Framework

Common Serverless frameworks include Serverless Framework, ZEIT Now, and Apex. However, these frameworks were all developed by companies outside China, and there was no comparable domestic framework at the time. Moreover, there is no unified Serverless standard yet, so the Serverless services provided by different cloud computing platforms differ and code cannot be migrated between them smoothly. A Serverless framework is designed to simplify the development and deployment of Serverless services and to shield the differences between platforms, so that functions can run on other Serverless services with little or no modification.

2.4 Web IDE

Web Integrated Development Environments (IDEs) are closely related to Serverless, and most cloud computing platforms offer one. With a Web IDE, it's easy to develop and debug functions in the cloud and deploy them directly to the corresponding FaaS platform, with no need to install development tools or configure environments locally. Common Web IDEs include AWS's Cloud9, Alibaba Cloud's Function Compute Web IDE, and Tencent Cloud's Cloud Studio. Among them, AWS's Cloud9 provides the best user experience.

2.5 Command Line Tools

Currently, the most common development method is still local development. Therefore, a command line tool is needed to develop Serverless applications locally.

There are two types of command line tools:

1) Tools provided by cloud computing platforms, such as the aws CLI from AWS, az from Azure, and fun from Alibaba Cloud.
2) Tools provided by Serverless frameworks, such as serverless and now. Most of these tools, including serverless and fun, are implemented in Node.js.

The following provides several examples of command line tools:

  • Creation

# serverless
$ serverless create --template aws-nodejs --path myService

# fun
$ fun init -n qcondemo helloworld-nodejs8
  • Deployment

# serverless
$ serverless deploy

# fun
$ fun deploy
  • Debugging

# serverless
$ serverless invoke [local] --function functionName

# fun
$ fun local invoke functionName

2.6 Scenarios

Above the development tool layer sit some vertical application scenarios of Serverless. In addition to traditional server-side development, Serverless technology is currently used to develop applets and may also be used to build Internet of Things (IoT) applications in the future.

2.7 Comparison of Different Serverless Services

[Figure 3]

The preceding figure compares different Serverless services in terms of supported languages, triggers, and prices. The results show that the services have both differences and similarities.

  • Almost all Serverless services support languages such as Node.js, Python, and Java.
  • Almost all services support triggers such as HTTP, object storage, scheduled tasks, and message queues.
  • These triggers are tied to each platform's own backend services. For example, Alibaba Cloud's object storage trigger fires on Alibaba Cloud OSS events, while AWS's fires on AWS S3 events, so the two are not interchangeable. The lack of a unified standard is another problem with Serverless.
  • Almost all platforms use the same billing model. As mentioned earlier, Serverless services are charged by usage. Each platform typically provides 1 million free calls per month, followed by about RMB 1.3 per million calls, and 400,000 GB-seconds (GB-s) of free execution, followed by about RMB 0.0001108 per GB-s. Therefore, Serverless is cost-effective when the application volume is small (a rough cost sketch follows this list).
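As a back-of-the-envelope sketch based on the prices quoted above (the traffic figures below are made up for illustration):

// Rough monthly cost estimate using the prices quoted above; the traffic figures are hypothetical
const calls = 3000000;        // invocations per month
const avgDurationSec = 0.2;   // average execution time per invocation, in seconds
const memoryGB = 0.128;       // 128 MB of memory allocated per invocation

const billableCalls = Math.max(calls - 1000000, 0);        // first 1 million calls are free
const callCost = (billableCalls / 1000000) * 1.3;          // about RMB 1.3 per million calls

const gbSeconds = calls * avgDurationSec * memoryGB;       // total GB-s consumed
const billableGbs = Math.max(gbSeconds - 400000, 0);       // first 400,000 GB-s are free
const computeCost = billableGbs * 0.0001108;               // about RMB 0.0001108 per GB-s

// With these numbers the compute usage stays inside the free tier,
// so the total comes to roughly RMB 2.6 per month.
console.log(`Estimated cost: RMB ${(callCost + computeCost).toFixed(2)} per month`);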

3) Frontend Development Based on Serverless

This section uses several cases to describe frontend development based on Serverless, and how it differs from traditional frontend development.

First, let's review the traditional development process.

[Figure 4]

In the traditional development process, frontend engineers write page code and backend engineers write interface code. After the interfaces are developed and deployed, frontend and backend engineers debug them together. Once debugging is complete, the interfaces are tested and published, and O&M engineers then maintain the system. The whole process involves many roles and a long chain, so communication and coordination are difficult.

Based on Serverless, we can simplify backend development. Traditional backend applications are split into functions. Backend engineers only need to write functions and deploy the functions to Serverless services. Subsequently, no server O&M is required, which significantly lowers the threshold for backend development. Therefore, only one frontend engineer is needed to complete all the development.

[Figure 5]

When frontend engineers write backend code based on Serverless, they require some familiarity with the backend. In the case of complex backend systems or scenarios where Serverless is inapplicable, backend development is still required and the backend is pushed even further back.

4) BFF Based on Serverless

On the one hand, different APIs are required for different devices. On the other hand, microservices make the calling of frontend interfaces more complicated. Therefore, frontend engineers began to use BFF to aggregate and tailor the interfaces to make them better suited to the frontend.

The following figure shows a universal BFF architecture.

[Figure 6]
Figure source: https://www.thoughtworks.com/insights/blog/bff-soundcloud

The bottom layer includes various backend microservices, and the top layer includes various frontend applications. The BFF layer sits between the microservices and the applications and is usually developed by frontend engineers. This architecture solves the problem of interface coordination but introduces new problems. For example, if a separate BFF application is developed for each type of device, development work may be duplicated. Moreover, frontend engineers used to develop only pages and focus on browser-side rendering; now they must also maintain various BFF applications. In the past, they did not need to think about concurrency, but now the concurrent load is concentrated on the BFF. Overall, the O&M cost is high, and O&M is not what frontend engineers are good at.

Serverless resolves these problems. It allows using functions to aggregate and tailor interfaces. A request sent by the frontend to BFF can be considered as an HTTP trigger for FaaS, which triggers the execution of a function that implements the business logic for the request. For example, use a function to call multiple microservices to obtain data and then return the processing results to the frontend. In this way, the O&M pressure is shifted from traditional BFF servers to FaaS services, and frontend engineers do not need to worry about the servers anymore.

[Figure 7]

The preceding figure shows the BFF architecture based on Serverless. To manage the APIs better, an API gateway layer is usually added; for example, an Alibaba Cloud API Gateway can group APIs and separate environments. With the API gateway in place, the frontend does not invoke a function through an HTTP trigger directly. Instead, it sends a request to the gateway, which then triggers the corresponding function.
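The following sketch shows what such an aggregation function might look like in Node.js. The microservice URLs, the event shape forwarded by the gateway, and node-fetch are all assumptions for illustration; real gateway payloads differ from platform to platform.

// A BFF implemented as a single function behind an API gateway
// (service URLs and the event shape are illustrative assumptions)
const fetch = require('node-fetch');

module.exports.handler = async (event, context) => {
  const { userId } = JSON.parse(event.toString()).queryParameters || {};

  // Aggregate several atomic microservice interfaces in parallel
  const [profile, cart] = await Promise.all([
    fetch(`http://user-service/api/users/${userId}`).then(r => r.json()),
    fetch(`http://cart-service/api/carts/${userId}`).then(r => r.json()),
  ]);

  // Tailor the combined data to exactly what the page needs
  return {
    statusCode: 200,
    body: JSON.stringify({ nickname: profile.nickname, cartCount: cart.items.length }),
  };
};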

5) Server Rendering Based on Serverless

Looking at the three most popular frontend frameworks, React.js, Angular, and Vue.js, most rendering today is done on the client. When a page is initialized, only a simple HTML file and the corresponding JavaScript files are loaded, and JavaScript then renders the page. The major problems with this approach are the white-screen time and Search Engine Optimization (SEO).

To solve these problems, frontend engineers are turning to server rendering. The core idea is similar to the earliest template rendering: the frontend initiates a request, the server generates the HTML, and the HTML is returned to the browser. The difference is that in the past we used templates in server-side languages such as JSP and PHP, whereas now isomorphic applications are built with React and Vue. This is the advantage of today's server rendering solutions.

However, server rendering brings additional O&M costs to the frontend because frontend engineers need to maintain servers for rendering. The biggest advantage of Serverless is that it reduces O&M operations. Does this mean Serverless can be used for server rendering? Yes, why not!

In traditional server rendering, each request path corresponds to a server-side route, and that route renders the HTML for the corresponding path. The server rendering application is the application that integrates all of these routes.

When Serverless is used for server rendering, each route is split into a separate function, and the functions are deployed on FaaS. Each request path then corresponds to an independent function. In this way, O&M is shifted to the FaaS platform, and frontend engineers can implement server rendering without maintaining or deploying a server application.

[Figure 8]
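As a minimal hand-rolled sketch of this idea (frameworks such as Next.js automate it, as shown next), a single page's rendering can be wrapped in one function. React and react-dom are assumed here, and the response shape is illustrative.

// One SSR route packaged as a single function (React/react-dom assumed; response shape is illustrative)
const React = require('react');
const { renderToString } = require('react-dom/server');

// The page component that used to sit behind a server route such as /about
function About() {
  return React.createElement('h1', null, 'About us');
}

module.exports.handler = (event, context, callback) => {
  const html = renderToString(React.createElement(About));
  callback(null, {
    statusCode: 200,
    headers: { 'Content-Type': 'text/html' },
    body: `<!DOCTYPE html><html><body><div id="root">${html}</div></body></html>`,
  });
};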

ZEIT's Next.js does a good job of implementing server rendering based on Serverless. The following is a simple example. The code structure is as follows:

├── next.config.js
├── now.json
├── package.json
└── pages
    ├── about.js
    └── index.js

// next.config.js
module.exports = {
  target: 'serverless'
}

Here, pages/about.js and pages/index.js are two pages, and next.config.js configures the Serverless build target. The now command is then used to deploy the code to ZEIT in a Serverless manner. During deployment, pages/about.js and pages/index.js become two functions that render the corresponding pages.

6) Applet Development Based on Serverless

At present, Serverless is mostly used for developing applets in China. One specific implementation is cloud-based applet development. Both Alipay applets and WeChat Mini Programs provide cloud-based development.

In traditional applet development, frontend engineers are responsible for applet development, while backend engineers are responsible for server development. The backend development of applets is essentially the same as that of other backend applications. Backend engineers need to focus on a series of deployment and O&M operations, such as load balancing, backup and disaster recovery, and monitoring and alerting for applications. If the development team is small, frontend engineers need to implement the server.

However, in cloud-based development, developers only need to focus on business implementation, and a single frontend engineer may develop the frontend and backend of the entire application. In cloud-based development, the backend is encapsulated into BaaS services and a corresponding SDK is provided for developers. The developers use various backend services in the same way they would call functions. In addition, application O&M is shifted to cloud development service providers.

[Figure 9]

The following examples are based on Alipay's Basement cloud development, where functions are defined in the FaaS service.

  • Database operations

// basement is a global object
// Database operation
basement.db.collection('users')
    .insertOne({
        name: 'node',
        age: 18,
    })
    .then(() => {
        resolve({ success: true });
    })
    .catch(err => {
        reject({ success: false });
    });
  • Upload images

// Upload an image
basement.file
    .uploadFile(options)
    .then((image) => {
        this.setData({
            iconUrl: image.fileUrl,
        });
    })
    .catch(console.error);
  • Call a function

// Invoke a function
basement.function
    .invoke('getUserInfo')
    .then((res) => {
        this.setData({
            user: res.result
        });
    })
    .catch(console.error);

7) Universal Serverless Architecture

Based on the preceding Serverless development examples, it's possible to generalize a universal Serverless architecture.

[Figure 10]

The bottom layer implements backend microservices used for complex businesses. The FaaS layer implements business logic through a series of functions and directly provides services for the frontend. Frontend developers implement server logic by writing functions. For backend developers, the backend is pushed back even further. If the business is lightweight, the FaaS layer implements the business logic, and even the microservice layer is unnecessary.

In addition, the BaaS services provided by the cloud computing platform can be used by the backend, by FaaS functions, or even directly by the frontend, which greatly reduces development difficulty and cost. In cloud-based applet development, for example, BaaS services are called directly from the frontend.

8) Best Practices in Serverless Development

The biggest difference between Serverless development and traditional development is that traditional development is application-based: after development, unit tests and integration tests are run against the application. In Serverless development, we develop functions instead, which raises some questions: How do we test Serverless functions? How does Serverless function testing differ from ordinary unit testing?

Another important concern is the performance of applications developed with Serverless, and how to improve it.

This section introduces the best practices in testing Serverless functions and improving function performance.

8.1 Function Testing

Although Serverless enables us to simply develop business applications, its features pose some challenges in terms of testing. These challenges are:

  • Serverless functions are distributed. You do not know and do not need to know on which hosts the functions are deployed or running. Therefore, you need to perform unit testing on each function.
  • A Serverless application is a group of functions that may depend on other backend services (BaaS). Therefore, you must perform an integration test on the Serverless application.
  • It is also difficult to locally simulate FaaS and BaaS that run functions.
  • FaaS environments and BaaS service SDKs or interfaces may vary with different platforms. This may cause some problems in testing and also increases the application migration cost.
  • The function execution is event-driven. It is difficult to locally simulate events that drive function execution.

So how can these problems be solved?

According to Mike Cohn's test pyramid, unit testing has the lowest cost and the highest efficiency, and UI testing (integration testing) has the highest cost and the lowest efficiency. Therefore, we recommend performing as many unit tests as possible to reduce the number of integration tests. This also applies to Serverless function testing.

[Figure 11]
Figure source: https://martinfowler.com/bliki/TestPyramid.html

To simplify unit testing on functions, separate the business logic from the function-dependent FaaS (such as Function Compute) and BaaS (such as cloud databases). After FaaS and BaaS are separated, test the business logic of a function in the same way as traditional unit testing. After that, write integration tests to verify whether the function works properly when integrated with other services.

8.2 A Bad Example

The following example shows how to implement a function using Node.js. The function is used to store user information in a database and then send an email to the user.

const db = require('db').connect();
const mailer = require('mailer');

module.exports.saveUser = (event, context, callback) => {
  const user = {
    email: event.email,
    created_at: Date.now()
  }

  db.saveUser(user, function (err) {
    if (err) {
      callback(err);
    } else {
      mailer.sendWelcomeEmail(event.email);
      callback();
    }
  });
};

This example has two main problems:

  • The business logic is coupled with FaaS: it lives inside the saveUser handler, whose event and context parameters are provided by the FaaS platform.
  • The business logic is coupled with BaaS: the db and mailer backend services are used directly inside the function, so testing the function requires depending on the db and mailer services.

8.3 Write a Testable Function

Refactor the preceding code by separating the business logic from the function-dependent FaaS and BaaS.

// users.js
class Users {
  constructor(db, mailer) {
    this.db = db;
    this.mailer = mailer;
  }

  save(email, callback) {
    const user = {
      email: email,
      created_at: Date.now()
    };

    // An arrow function keeps `this` bound to the Users instance
    this.db.saveUser(user, (err) => {
      if (err) {
        callback(err);
      } else {
        this.mailer.sendWelcomeEmail(email);
        callback();
      }
    });
  }
}

module.exports = Users;

// handler.js
const db = require('db').connect();
const mailer = require('mailer');
const Users = require('users');

let users = new Users(db, mailer);

module.exports.saveUser = (event, context, callback) => {
  users.save(event.email, callback);
};

In the refactored code, all the business logic is placed in the Users class, which does not rely on any external service. During testing, you do not have to pass in the real db or mailer; you can pass in mock services instead.

The following is an example of a simulated mailer.

// Mock mailer
const mailer = {
  sendWelcomeEmail: (email) => {
    console.log(`Send email to ${email} success!`);
  },
};

In this way, as long as the Users class is fully covered by unit tests, the business code runs as expected. After that, a simple integration test with the real db and mailer verifies whether the entire function works properly.
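For example, a unit test for the Users class could look like the sketch below. It uses Node's built-in assert module and mocks in the style shown above, so neither FaaS nor BaaS is involved; the file names are assumptions.

// users.test.js — unit test the business logic with mocked db and mailer
const assert = require('assert');
const Users = require('./users');

// Mock db: pretend every save succeeds
const db = {
  saveUser: (user, callback) => callback(null),
};

// Mock mailer: record which addresses were mailed
const sent = [];
const mailer = {
  sendWelcomeEmail: (email) => sent.push(email),
};

const users = new Users(db, mailer);

users.save('test@example.com', (err) => {
  assert.strictEqual(err, undefined);
  assert.deepStrictEqual(sent, ['test@example.com']);
  console.log('Users.save unit test passed');
});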

The refactored code also makes functions easier to migrate. To migrate a function from one platform to another, you only need to adapt how the Users class is called to the parameters provided by the target platform, without modifying the business logic.

8.4 Summary

To sum up, keep the test pyramid in mind while testing functions and observe the following principles:

  • Separate the business logic from the function-dependent FaaS and BaaS.
  • Perform full unit testing on the business logic.
  • Perform integration testing on functions to verify that the code works properly.

9) Function Performance

When Serverless is used for development, another concern is function performance. Traditional applications stay resident in memory once they are started. Serverless functions do not: when an event that drives function execution arrives, the platform must download the code, start a container, and start a runtime environment in the container before it can execute the code. The first three steps are collectively referred to as a cold start, a process that traditional applications do not go through.

The following figure shows the lifecycle of a function.

[Figure 12]
Figure source: https://www.youtube.com/watch?v=oQFORsso2go&feature=youtu.be&t=8m5s

The cold start time is a key performance index of a function. To optimize the performance of the function, you need to optimize each stage in the function lifecycle.

9.1 Impact of Different Programming Languages on the Cold Start Time

Many people have already tested the impact of different programming languages on the cold start time. For example:

  • Compare cold start time with different languages, memory and code sizes by Yan Cui
  • Cold start/Warm start with AWS Lambda by Erwan Alliaume
  • Serverless: Cold Start War by Mikhail Shilkov

[Figure 13]
Figure source: Cold start/Warm start with AWS Lambda

The following conclusions are drawn from these tests:

  • Increasing the memory size of functions helps reduce the cold start time.
  • The cold start time of programming languages such as C# and Java is about 100 times that of Node.js and Python.

Based on the preceding conclusions, if you require that the cold start time of Java be as short as that of Node.js, you can allocate more memory to Java. However, a larger memory means a higher cost.

9.2 Cold Start Time Points of Functions

Developers new to Serverless may mistakenly think that each function execution requires a cold start. In fact, this is not the case.

When the first request (the event that drives function execution) arrives, a runtime environment is started and the function is executed. The runtime environment is then retained for a period of time and reused to serve subsequent requests, which reduces the number of cold starts and shortens execution time. When the volume of requests exceeds what the existing runtime environments can handle, the FaaS platform automatically starts additional ones.

[Figure 14]

Take AWS Lambda as an example. After a function is executed, Lambda maintains the execution context for a period of time, during which it is used for subsequent Lambda function calls. In effect, the service freezes the execution context after the Lambda function is completed. If AWS Lambda chooses to reuse the context when the Lambda function is called again, the context is unfrozen for reuse.

The following two small tests illustrate this behavior.

I used Alibaba Cloud Function Compute to deploy a Serverless function driven by an HTTP trigger and then initiated 100 requests against it at different concurrency levels.

At first, the concurrency was 1:

[Figure 15]

In this case, the first request took 302 ms, while each of the remaining requests took about 50 ms. This indicates that the first request triggered a cold start, whereas the remaining 99 requests were warm starts that reused the runtime environment created by the first request.

Then, I set the concurrency to 10:

[Figure 16]

In this case, each of the first 10 requests took 200 ms to 300 ms, and each of the remaining requests took about 50 ms. This shows that the first 10 concurrent requests were cold starts, with 10 runtime environments started at the same time, while the remaining 90 requests were warm starts.

This also demonstrates our previous conclusion that a function does not cold start every time, but can reuse a previous runtime environment within a certain period of time.

9.3 Reuse Execution Context

Does the preceding conclusion help us improve the performance of functions? Yes, of course. Since a runtime environment can be retained, the execution context in the runtime environment can be reused.

Here is an example:

const mysql = require('mysql');

module.exports.saveUser = (event, context, callback) => {
    // Initialize the database connection
    const connection = mysql.createConnection({ /* ... */ });
    connection.connect();

    connection.query('...');
};

The preceding example uses the saveUser function to initialize a database connection. In this case, the database connection is re-initialized during each function execution, so it takes some time to connect to the database each time. Obviously, this is not good for the performance of the function.

Since the execution context of the function can be reused within a short period of time, the database connection can be placed outside the function.

const mysql = require('mysql');

// Initialize the database connection
const connection = mysql.createConnection({ /* ... */ });
connection.connect();

module.exports.saveUser = (event, context, callback) => {
    connection.query('...');
};

In this case, the database connection is initialized only when the runtime environment is started for the first time. When a subsequent request comes in and the function is executed, the connection in the execution context can be directly reused to improve the performance of the function.

In most cases, it is perfectly acceptable to sacrifice the performance of one request in exchange for improving the performance of most requests.

9.4 Preload Functions

Since the runtime environment of a function is retained for a period of time, you can actively call the function at regular intervals to keep a runtime environment warm. This allows warm starts for all normal requests and avoids the impact of cold start time on function performance.

This method is relatively effective at present, but pay attention to the following points:

  • Do not call a function too frequently. I recommend an interval of more than five minutes.
  • Directly call a function instead of indirectly calling the function using a gateway.
  • Create a function specifically used for preloading calls, instead of using a normal business function.

This is an effective but somewhat advanced solution. If your business can tolerate a slower first request, it is unnecessary.
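A minimal sketch of this approach is shown below: a dedicated warm-up caller (for example, a function on a timer trigger) invokes the business function directly every few minutes with a payload it recognizes as a warm-up ping. The warmup flag is a convention assumed here, not a platform feature.

// business.js — the business function short-circuits warm-up pings so a dedicated,
// timer-triggered caller can invoke it directly and keep its runtime environment warm.
module.exports.handler = (event, context, callback) => {
  const payload = JSON.parse(event.toString());

  if (payload.warmup) {
    // Warm-up ping: return immediately without touching any backend service
    return callback(null, 'warmed');
  }

  // ... real business logic goes here ...
  callback(null, { ok: true });
};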

9.5 Summary

In general, optimizing the performance of a function means optimizing the cold start time. The preceding solutions are optimizations that developers implement. Of course, the performance can also be improved on a FaaS platform.

Pay attention to the following points when using the preceding solutions:

  • Select a programming language with a short cold start time, such as Node.js or Python.
  • Allocate sufficient memory for function execution.
  • Reuse execution context.
  • Preload functions.

10) Summary

Frontend engineers have long been discussing the boundaries of the frontend. Frontend development today is very different from what it was in the past. Frontend engineers now develop web pages, applets, applications, desktop applications, and even servers. They are constantly expanding their boundaries and exploring more fields because they want to create more value, and it is best to create that value with familiar tools and methods.

The Serverless architecture gives frontend engineers the greatest possible assistance as they work toward this goal. With Serverless, you no longer need to care about server O&M or other unfamiliar fields; you only need to focus on business development and product implementation. Serverless will certainly bring great changes to the frontend development model, and frontend engineers will once again play the role of application engineers. To sum up Serverless in one sentence: Less is More.
