
Five Trends in Cloud-Native

Li Xiang and Zhang Lei, two Senior Technical Experts at Alibaba Cloud, summarize the concept, development, and future trends of cloud-native in five main points.

By Li Xiang and Zhang Lei, Edited by Heyi

The future development trends in cloud-native can be summarized in five main points:

  1. Kubernetes will follow the Android model: Kubernetes will standardize the infrastructure layer, bringing the value of cloud-native to the application layer.
  2. Applications and capabilities will be provided through Operators.
  3. Middleware capabilities will sink further into the infrastructure layer and be decoupled from the language the business is written in. Fat clients will give way to service access based on sidecars and thin clients.
  4. Next-generation DevOps models and experiences based on Kubernetes API resource models will become mainstream.
  5. Intrinsically elastic infrastructure, with unlimited resource pools, strong multitenancy, and built-in intelligence, is the future.

The core assumption of cloud-native is that the software of the future will be born and raised on the cloud. The concept of "cloud-native" defines the optimal approach for applications to make maximum use of the capabilities, and therefore the value, of the cloud. Cloud-native is thus a set of ideas that guide the design of software architectures: first, the software must be born and mature on the cloud; second, it must make maximum use of cloud capabilities. In this way, software and the cloud are organically integrated, and the full potential of the cloud is realized.

Cloud-native is already familiar to many people in the industry, and many enterprises have built practices around cloud-native architectures and technical concepts. It is therefore worth examining the future trends in the cloud-native field, which will show us how to adapt as cloud-native goes mainstream.

To this end, we reached out to Li Xiang, a senior technical expert in Alibaba's cloud-native division, a member of the CNCF Technical Oversight Committee, and the author of etcd, and Zhang Lei, a senior technical expert at Alibaba Cloud and co-chair of CNCF SIG App Delivery. In this article, the two experts discuss the concept, development, and future trends of cloud-native, opening doors to new ideas and new horizons.

1. Kubernetes Will Follow the Android Model

Kubernetes is a key project in cloud-native. The rapid development of Kubernetes has made it the cornerstone of the overall cloud-native system. Today, we will look at the development characteristics of Kubernetes. First, Kubernetes is ubiquitous. It is used in clouds, in user-built data centers, and will play a role in scenarios we haven't imagined yet.

Second, all cloud-native users use Kubernetes to deliver and manage applications. Applications are a general concept. Applications can take the form of websites, large e-commerce platforms (like Taobao), AI jobs, computing tasks, functions, and virtual machines (VMs). In all these cases, users can use Kubernetes to deliver and manage applications.

Third, Kubernetes currently acts as a connecting link. Downward, it abstracts infrastructure capabilities into formatted data exposed through objects such as Service, Ingress, Pod, and Deployment; these are the capabilities that the native Kubernetes APIs expose to users. At the same time, Kubernetes defines standard plug-in interfaces, such as CNI, CSI, Device Plugins, and CRDs, through which infrastructure capabilities can be integrated. This enables the cloud to act as a capability provider and plug its capabilities into the Kubernetes system in a standardized way.
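As a minimal illustration of this standard interface, the sketch below uses client-go, the official Go client, to list Deployments. The kubeconfig path is a placeholder; the same call works against any conformant Kubernetes cluster, on any cloud or in any data center:

```go
// Minimal sketch: consuming Kubernetes' standard application APIs with
// client-go. The kubeconfig path is hypothetical.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Deployment is part of the standardized interface: this call is the
	// same whether the cluster runs on a public cloud or on-premises.
	deployments, err := clientset.AppsV1().Deployments("default").
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range deployments.Items {
		fmt.Println(d.Name)
	}
}
```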

In this respect, Kubernetes plays a role similar to Android. Android is an operating system installed on individual devices, yet it connects phones, TVs, cars, and other hardware to a single platform. It also provides a unified set of application management interfaces, so that applications programmed against Android can run on any of these devices and access their capabilities. Kubernetes does the same for infrastructure.

Finally, Kubernetes itself does not directly generate commercial value; users do not purchase Kubernetes as such. This is also like Android: you do not pay for the Android system itself. For both Android and Kubernetes, the real value lies in the upper-layer application ecosystem. Android already has a huge application-development ecosystem spanning phones and other devices. Kubernetes is similar but at an earlier stage of development: many of the business layers built on it today are vertical solutions, and it is these user-oriented, application-oriented solutions that really generate commercial value, not Kubernetes itself. That is why we say Kubernetes' development resembles Android's. This may also reflect an approach Google has perfected: promote a free "operating system" and profit from the value of its upper-layer ecosystem rather than from the operating system itself.

In light of this context, we can summarize the development trends of Kubernetes:

1.1 The Value of the Cloud is Moving Toward the Application Layer

Users use Kubernetes to deliver and manage applications. If Kubernetes continues to grow, all data centers and infrastructure around the world will have a Kubernetes layer. Then, users will naturally start to program, deliver, and manage applications based on Kubernetes. This is similar to how we program mobile apps based on the Android operating system.

In this scenario, most software and cloud products on the cloud will be developed by third parties. Third-party development allows everyone to develop and deliver software on a standard interface. The software can be either proprietary software or cloud products. In the future, more and more third-party open-source projects, such as MongoDB and Elasticsearch, will be developed, deployed, and maintained cloud-natively. Ultimately, they will evolve into cloud services.

1.2 "Pea Pods" Will Emerge on the Cloud

With Kubernetes as the standard, developers face something like an operating system interface, and many applications are designed for Kubernetes or delivered to Kubernetes. Therefore, a product similar to Wandoujia ("Pea Pod," a popular Android app store in China) is required. As a cloud-based app store or application distribution system, such a "pea pod" could deliver applications to any Kubernetes cluster in the world without compatibility worries, just as Wandoujia can deliver any Android application to any Android device.

Google has already tried out this type of product. One example is Anthos, an application delivery platform for hybrid clouds. Although this is a hybrid cloud product, it essentially works by delivering database services, big data services, and other Google Cloud services to any hybrid cloud environment based on Kubernetes. It is equivalent to a pea pod for clouds.

1.3 Open Application Platforms with Kubernetes-Based Scalability Will Replace PaaS

Since the entire application ecosystem of the future will be oriented to Kubernetes, open application platforms with Kubernetes-based scalability will gradually replace traditional PaaS as the mainstream approach. When you build an open application platform with Kubernetes-based scalability, its capabilities are pluggable, and it can deliver and manage various application types. This is more in line with Kubernetes trends and ecosystems. Therefore, we will see the emergence of many platform-layer projects with high scalability.

There is still a large gap between Kubernetes and an ideal cloud-native application ecosystem. This is what the Alibaba Cloud-Native team has been working on. Based on Kubernetes, we want to build a richer application ecosystem at the application layer to meet diverse requirements.

2. Operator-Based Applications and Capabilities

By observing how applications and cloud capabilities are developing in the cloud-native era, you will find another trend: Operators. An Operator is a Kubernetes concept: an entity delivered through Kubernetes, built on a simple model with two parts, a Kubernetes API object defined through a custom resource definition (CRD) and a controller that acts on it. This is shown in the following figure:

[Figure: the basic Operator model, a custom API object (CRD) paired with a controller]

Here, we need to distinguish between two concepts: customization and automation. Many people see Operators as a tool for customization: when they need more than Kubernetes' built-in capabilities, they use the extensibility that Operators provide to write controllers that meet their needs. However, customization is only a small part of what Operators do. The core motivation behind today's Operator-based applications and capabilities is automation, and only automation lets us truly achieve cloud-native.

This is because the greatest benefit of cloud-native is that it allows us to efficiently use cloud capabilities to their full potential. This is something we cannot achieve manually. In other words, only through automated application development, O&M, and automated interaction with the cloud can the value of cloud-native be truly leveraged.

To automate interactions with the cloud, we must use a plug-in, such as a controller or an Operator, in the cloud-native ecosystem. Currently, the products Alibaba delivers on the cloud, such as PolarDB and OceanBase, use a controller connected to Kubernetes. The controller interacts with the infrastructure and cloud to embed cloud capabilities in these products.
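In compressed form, such a controller looks like the sketch below, written against the controller-runtime conventions popularized by kubebuilder. The Database kind, its fields, and the reconcile logic are hypothetical, and the generated deep-copy code is reduced to a stub:

```go
// Sketch of the Operator model: a custom API object plus a controller that
// continuously reconciles it toward the declared state.
package controllers

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// Database is the declarative half: users record desired state here, and
// the CRD registers this schema with the API server. (Hypothetical kind.)
type Database struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              DatabaseSpec `json:"spec,omitempty"`
}

type DatabaseSpec struct {
	Version  string `json:"version"`
	Replicas int32  `json:"replicas"`
}

// DeepCopyObject satisfies runtime.Object; controller-gen generates the
// real deep copy in an actual project. This stub is for illustration only.
func (in *Database) DeepCopyObject() runtime.Object {
	out := *in
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	return &out
}

// DatabaseReconciler is the automation half: it watches Database objects
// and drives the real world toward the declared state, with no human in
// the loop.
type DatabaseReconciler struct {
	client.Client
}

func (r *DatabaseReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var db Database
	if err := r.Get(ctx, req.NamespacedName, &db); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// O&M knowledge lives here: provisioning, upgrades, backup, failover.
	// The loop reruns whenever actual and desired state diverge.
	return ctrl.Result{}, nil
}

func (r *DatabaseReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).For(&Database{}).Complete(r)
}
```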

In the future, a large number of cloud applications and their corresponding O&M and management capabilities will be delivered through Kubernetes Operators. In this context, Kubernetes serves as an access layer and standard interface for capabilities. The following figure shows a typical user-side Kubernetes cluster:

[Figure: a typical user-side Kubernetes cluster, with the native Kubernetes APIs highlighted in red and surrounded by third-party custom resources and Operators]

In a user's Kubernetes system, the native Kubernetes APIs account for only the section highlighted in red; many other capabilities are implemented as plug-ins or Operators. For example, in the preceding figure, all the custom resources and capabilities are developed by third parties and delivered to end-users through Operators. This means that, in the future cloud-native ecosystem, most applications and capabilities will be based on CRDs and Operators, not on the native Kubernetes APIs alone.

As this trend continues, more and more software and capabilities will be described and defined by Kubernetes Operators. Cloud products will be based on Kubernetes and delivered through Operators.

As more and more Operators emerge, we will need to use a centralized method to solve their potential stability, discoverability, and performance problems. In other words, we will probably need a horizontal Operator management platform to manage all the Kubernetes Operator-based applications and capabilities in a unified manner. This will allow us to serve users in a better and more professional manner.

We will also need to program an Operator for each capability or application. Therefore, developer-friendly Operator programming frameworks are likely to be a major future trend. Such a programming framework would support different languages, including Go, Java, C, and Rust. It would allow programmers to focus on O&M logic, application management, and capability management, rather than the semantics and details of Kubernetes coding.

Finally, the popularization of the cloud-native ecosystem will promote the implementation of cloud services as Operators. Application-layer-oriented cloud services will be defined and abstracted in a standardized manner for use in multi-cluster or hybrid cloud environments. In addition, Operator-based methods will gradually replace infrastructure-as-code (IaC) projects, such as Terraform, in cloud service management and consumption in the cloud-native field.

3. Further Sinking of Application Middleware Capabilities to the Infrastructure

As cloud-native and the overall ecosystem develop, we will see many changes in the application middleware field. Having gone from the original centralized enterprise service bus (ESB) architectures to fat clients, we will now see a gradual evolution to the sidecar-based service meshes that are a hot topic today.

[Figure: the evolution of middleware, from centralized ESB architectures to fat clients to sidecar-based service meshes]

Today, cloud capabilities and infrastructure capabilities are constantly expanding. Many things that could only be done through middleware can now be easily achieved through cloud services. Application middleware is no longer a capability provider, but a standard interface through which users can access capabilities. The standard interface is no longer built on fat clients but implemented through the common HTTP and gRPC protocols. The sidecar approach decouples the service access layer from the application business logic. This is the idea of service mesh.

Currently, service meshes cover only part of what traditional middleware does: traffic governance, routing policies, and access control. However, the sidecar model can be applied to all middleware scenarios to completely decouple middleware logic from application business logic, sinking application middleware capabilities down to the Kubernetes layer. In this way, applications can be more specialized, with more attention paid to the business logic.
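As a hedged sketch of what the thin client looks like from the application's point of view, the code below speaks plain HTTP to a local endpoint and leaves routing, retries, and security to the sidecar. The port and service path are hypothetical, and some meshes intercept traffic transparently instead of exposing an explicit local endpoint:

```go
// Thin-client sketch: the business code needs only the standard library.
// A platform-injected sidecar on localhost (hypothetical port 15001)
// proxies the call to the "inventory" service, wherever it actually runs.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:15001/inventory/items/42")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```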

In response to this trend, another trend will emerge at the Kubernetes layer: automated, large-scale sidecar O&M. Since there will be a huge number of sidecars, and application middleware is likely to evolve into sidecar clusters, sidecar management and large-scale O&M capabilities will become a necessity for clusters and cloud products.

4. Next-Generation DevOps Models and Systems

With the continuous development of the cloud-native ecosystem and the widespread adoption of the cloud-native concept, DevOps is likely to undergo an essential change: the emergence of a new generation of DevOps models and systems. As Kubernetes grows more powerful and the infrastructure more capable, it will become easy to build application platforms on top of this infrastructure, and such platforms will eventually replace traditional PaaS platforms.

We currently use DevOps because the infrastructure is still not sufficiently powerful, standardized, or practical, so business R&D needs a set of tools to connect developers with the infrastructure. For example, when the infrastructure provided capabilities only in the form of VMs, turning those VMs into the blue-green release or progressive delivery systems that R&D teams wanted required a series of DevOps tools and a continuous integration and continuous delivery (CI/CD) pipeline. Now the situation has changed: the capabilities of the Kubernetes infrastructure keep expanding, allowing Kubernetes to provide blue-green releases and other such capabilities itself.
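For instance, here is a minimal sketch of a blue-green cutover expressed directly against the Kubernetes API rather than scripted in an external tool; the Service name, labels, and kubeconfig path are hypothetical:

```go
// Sketch: a blue-green cutover as a single declarative change against the
// Kubernetes API. The "blue" and "green" Deployments are assumed to be
// running already; shifting traffic just repoints the Service selector.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Point the "shop" Service at the green track.
	patch := []byte(`{"spec":{"selector":{"app":"shop","track":"green"}}}`)
	if _, err := clientset.CoreV1().Services("default").Patch(
		context.TODO(), "shop", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```

This will fundamentally alter DevOps in the following ways: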

1. Separation of Concerns

In the Kubernetes context, "software" is no longer a single deliverable controlled by the application owner, but a collection of multiple Kubernetes objects. Of these objects, only a small number concern the R&D team; the application owner is not even aware of many of them. This leads to a separation of concerns on the platform: the focus of the R&D team is completely different from that of the O&M or system teams. R&D personnel do not have to consider O&M details, such as how to implement blue-green releases or horizontal scale-out; they simply write and deliver the business code.

As Kubernetes and its infrastructure grow increasingly complex and the relevant concepts multiply, it will be impossible for developers to understand every concept involved at the platform layer. Therefore, the future cloud-native ecosystem will inevitably be abstracted and stratified: each role will interact only with its own layer of data abstractions. The R&D team will have its own declarative API objects, the O&M team a different set, and each layer will have a different focus. This situation will provide the context for the future development of the overall DevOps system.

2. Widespread Adoption of Serverless

Cloud-native focuses on applications. In this context, serverless is no longer an independent scenario and no longer limited to certain vertical fields. Serverless will become a general approach and an intrinsic component of cloud-native application management systems. Let us explain this from two perspectives. First, in terms of capabilities, lightweight O&M, NoOps, and self-service O&M will become mainstream application O&M capabilities. Application management in the cloud-native ecosystem is a lightweight O&M process. This means application O&M is no longer manual and complicated. Rather, it is an out-of-the-box process with very simple modular operations. Both Kubernetes and cloud-native are used to modularize the underlying infrastructure. This is similar to the NoOps concept advocated in the serverless field.

Second, in terms of applications, application descriptions will be extensively abstracted on the user side, and the event-driven and serverless concepts will be split out and generalized. In this way, serverless capabilities can be applied in a wide range of scenarios instead of the narrow ones we see today, such as FaaS and container instances. In the future, any application will be able to scale down to zero.
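As a rough sketch of what scale-to-zero means at the API level, the following code sets a Deployment's scale subresource to zero. A serverless platform would trigger this automatically from traffic signals; the deployment name and kubeconfig path are hypothetical:

```go
// Sketch: "scale to zero" expressed through the Kubernetes scale
// subresource. An idle application releases all of its replicas and stops
// consuming (and paying for) resources.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	deployments := clientset.AppsV1().Deployments("default")
	scale, err := deployments.GetScale(context.TODO(), "checkout", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	scale.Spec.Replicas = 0
	if _, err := deployments.UpdateScale(
		context.TODO(), "checkout", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```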

3. Application Layer Technology Based on IaD Will Go Mainstream

First, the idea of infrastructure-as-data (IaD) will go mainstream. In Kubernetes, IaD takes the form of declarative APIs, whose core idea is to describe infrastructure, applications, and capabilities through declarative files or objects. The file or object itself is "data," so Kubernetes, or the infrastructure layer beneath it, is driven by data. This idea underpins many cutting-edge technologies, such as GitOps and YAML pipeline tools (including Kustomize and kpt), and this sort of pipeline-based application management will become the mainstream application management method in the cloud-native ecosystem.
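A small sketch of the IaD idea: a Kubernetes object is just nested data that a program can read and rewrite, which is, in spirit, how overlay tools in the Kustomize/kpt family transform manifests. The object and values here are illustrative:

```go
// Infrastructure as data: the Deployment below is a plain nested map, and
// "operating" on it means transforming that data, not calling an
// imperative API.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
	deployment := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "apps/v1",
		"kind":       "Deployment",
		"metadata":   map[string]interface{}{"name": "web"},
		"spec":       map[string]interface{}{"replicas": int64(1)},
	}}

	// An "overlay" is just a rewrite of the data.
	if err := unstructured.SetNestedField(
		deployment.Object, int64(3), "spec", "replicas"); err != nil {
		panic(err)
	}

	replicas, _, _ := unstructured.NestedInt64(deployment.Object, "spec", "replicas")
	fmt.Println("replicas:", replicas) // prints: replicas: 3
}
```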

Second, declarative application definition models (such as the Open Application Model, OAM), declarative CI/CD systems, and declarative pipelines will become a new method of application delivery. Traditional Jenkins, for example, organizes pipelines imperatively. With the emergence of declarative pipelines and the growing popularity of Kubernetes and the cloud-native ecosystem, IaD-based pipelines and next-generation CI/CD systems will become mainstream in the industry. These systems will be fundamentally different from earlier CI/CD pipelines because every operation in them is a declarative description. Since they are declarative descriptions, all of these operations and processes can be hosted in Git (on GitHub, for example); even manual approval operations can be recorded there for auditing and version management.

The emergence of IaD tells us that in future cloud-native systems, everything will be an object and everything will be data. As objects and data multiply, managing, auditing, and verifying them will become increasingly complicated. Therefore, we will need policy engines for these objects and data. Policy engines will be very important in future systems: every Kubernetes application platform may need one to help users enforce data operation policies across different scenarios.
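Because everything is data, a policy reduces to a predicate over that data. The toy check below stands in for a real policy engine (such as Open Policy Agent with Gatekeeper, which generalizes this into a full rule language); the required "team" label is a hypothetical policy:

```go
// Toy policy check: since every Kubernetes object is data, a policy is a
// predicate over that data. Real engines express such rules declaratively.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// requireTeamLabel rejects any object that does not declare an owning team.
func requireTeamLabel(obj *unstructured.Unstructured) error {
	if _, ok := obj.GetLabels()["team"]; !ok {
		return fmt.Errorf("%s/%s violates policy: missing label %q",
			obj.GetKind(), obj.GetName(), "team")
	}
	return nil
}

func main() {
	pod := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "Pod",
		"metadata":   map[string]interface{}{"name": "web-0"},
	}}
	if err := requireTeamLabel(pod); err != nil {
		fmt.Println(err) // Pod/web-0 violates policy: missing label "team"
	}
}
```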

4. End-User Experience Layers Built on Top of IaD

Note that although IaD will become a mainstream technology at the application layer, exposing it directly could harm the end-user experience: the human brain is better at following set processes and rules than at interpreting static data. For ease of understanding, we need to construct an experience layer for end-users above the IaD layer. This means Kubernetes will not present declarative data directly to end-users; the data will instead be mediated by a domain-specific language (DSL) that understands the Kubernetes data model, by an API-object-based CLI or dashboard, or by an application-centric interaction and collaboration process. The end-user experience layer determines whether a product can attract and retain users, and this will ultimately determine whether cloud-native systems are user-friendly.

5. DevSecOps

With the development of the next-generation DevOps systems described earlier, security will be incorporated into application delivery from the very beginning. The industry calls this combination of development, security, and operations DevSecOps: the practice of considering security policies, security issues, and security configuration from the very start of application development, rather than, as in the traditional approach, performing security auditing and management only after the application is delivered or launched.

5. Serverless and Cloud-Native Deployment of Underlying Infrastructure

With the development of cloud-native systems, the value of the cloud is gradually moving to the application layer, and systems are evolving toward declarative APIs and IaD. These trends will also change the underlying infrastructure in several ways. First, infrastructure capabilities will be provided through declarative APIs and in a self-service manner. Today, clouds are collections of infrastructure capabilities and can be thought of as an infinite capability layer; we now assume that any infrastructure capability can be provided by the cloud. This is completely different from the previous view of infrastructure. In the past, both cloud and infrastructure capabilities were weak, so we had to build large middleware systems and sophisticated DevOps tooling as the glue connecting the infrastructure with applications, R&D, and O&M personnel.

In the future, applications will play the leading role in the cloud-native ecosystem. The cloud provides the capabilities that applications need through a standardized access layer, rather than by directly dealing with the infrastructure. The development of the cloud-native ecosystem will lead to a major change from the user perspective. The process will change from infrastructure-oriented to application-oriented. The infrastructure will now be able to provide any capability that users want. In the future, infrastructure will be centered on applications.

[Figure: application-centric infrastructure, with the cloud providing the capabilities applications need through a standardized access layer]

This concept is similar to serverless; we might call it serverless-native underlying infrastructure. The infrastructure will gradually evolve toward declarative APIs and, as a direct result, become self-service infrastructure.

Building more intelligent infrastructure will become an important means of achieving declarative, self-service infrastructure. Once the modular capabilities of the infrastructure system are defined as data, we can easily drive infrastructure operations from monitoring data and historical data; this is what we mean by "self-driving infrastructure." Data-driven intelligent infrastructure becomes possible once the infrastructure itself provides declarative APIs and self-service capabilities.

At the same time, since the application layer generalizes serverless capabilities, features such as scale-to-zero and pay-as-you-go will become basic assumptions of applications. As a result, the resource layer will have to achieve high scalability and implement effectively unlimited resource pools. As part of an intelligent infrastructure, intelligent scheduling and colocated (hybrid) deployment will ensure optimal resource utilization and minimize costs.

At the same time, to maximize resource efficiency, the underlying layer must adopt a Kubernetes-oriented architecture that integrates with Kubernetes naturally. This is reflected in two areas. First, at the runtime layer, such an infrastructure is better served by container runtimes based on hardware virtualization, such as Kata Containers, than by traditional VMs; ECS Bare Metal Instances then become the more suitable hosts. As this technology develops, lightweight virtual machine monitors (VMMs) will become a key technology for optimizing container runtimes and improving the agility of the overall infrastructure.

Second, the multitenancy control plane should be physically, not merely logically, isolated between tenants; the Kubernetes data model calls for strong physical isolation between tenant control planes. This is why we believe that future architectures featuring strong multitenancy will be built on Kubernetes. Alibaba has observed this trend, and we are working to better adapt to the development of serverless-native infrastructure.

The next step in the evolution of cloud computing is cloud-native. The next step in the development of IT architectures is cloud-native architecture. This is why it is a great time to be a developer. Alibaba Cloud will release the Cloud-Native Architecture white paper in July to help developers, architects, and technical decision-makers come together to define and embrace cloud-native.
