By Sun Jianbo (Tianyuan), Technical Expert at Alibaba Cloud. Tianyuan is one of the main authors of the OAM specification and is committed to promoting the standardization of cloud-native applications. He also works on the delivery and management of large-scale cloud-native applications at Alibaba.
After nearly three months of iteration, the Open Application Model (OAM) specification (spec) finally ushered in the v1alpha2 version. While adhering to the platform-agnostic characteristic of the OAM spec, the new version is more Kubernetes-friendly. It balances standards with scalability to a large extent and effectively supports Custom Resource Definitions (CRDs). If you have an existing CRD Operator, just connect it to the OAM system and leverage the benefits of OAM.
Currently, OAM has become the core architecture that many companies, including Alibaba, Microsoft, Upbound, and Harmony Cloud, use to build their cloud products. With OAM they have built application-oriented, user-friendly Kubernetes PaaS systems: relying on OAM's standards and extensibility, they implemented the core OAM controllers and plugged in existing Operator capabilities, and by horizontally connecting multiple modules through OAM, they broke out of the dilemma where Operators are isolated from each other and cannot be reused.
Let's get to the point and take a look at what changes are incorporated in the v1alpha2 version.
This article describes the major changes in detail. Before deep-diving into the changes, let's take a quick look at the key terms used in the following sections.
Note: For related details, refer to the upstream OAM spec GitHub repository.
The original mode of v1alpha1 is as follows:
# Old version, shown for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: WorkloadType
metadata:
  name: OpenFaaS
  annotations:
    version: v1.0.0
    description: "OpenFaaS a Workload which can serve workload running as functions"
spec:
  group: openfaas.com
  version: v1alpha2
  names:
    kind: Function
    singular: function
    plural: functions
  workloadSettings: |
    {
      "$schema": "http://json-schema.org/draft-07/schema#",
      "type": "object",
      "required": [
        "name", "image"
      ],
      "properties": {
        "name": {
          "type": "string",
          "description": "the name to the function"
        },
        "image": {
          "type": "string",
          "description": "the docker image of the function"
        }
      }
    }
In this mode, group, version, and kind are separate fields, and the spec validation is expressed as a JSON schema. The overall format is quite similar to a CRD, but not entirely consistent with it.
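For comparison, here is a hedged sketch of what the equivalent native Kubernetes CRD would look like (the manifest below is illustrative, not taken from the OAM spec): the same group/version/kind information and schema validation are already expressed by the CRD itself, which is why v1alpha2 can simply reference a CRD instead of restating it.

```yaml
# Illustrative CRD for the same OpenFaaS Function type (assumed, not from the spec)
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: functions.openfaas.com
spec:
  group: openfaas.com
  scope: Namespaced
  names:
    kind: Function
    singular: function
    plural: functions
  versions:
  - name: v1alpha2
    served: true
    storage: true
    schema:
      # The JSON-schema content from the old workloadSettings field maps
      # directly into the CRD's OpenAPI v3 schema.
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            required: ["name", "image"]
            properties:
              name:
                type: string
              image:
                type: string
```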
In the v1alpha2 version, a reference model is introduced: WorkloadDefinition, TraitDefinition, and ScopeDefinition each describe a reference relationship. To use a CRD, reference it directly; the name is simply the name of the CRD. For non-Kubernetes implementations of OAM, this name is an index used to locate a validation file similar to a CRD, which contains the apiVersion, the kind, and the corresponding schema validation.
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  name: containerizedworkloads.core.oam.dev
spec:
  definitionRef:
    # Name of CRD.
    name: containerizedworkloads.core.oam.dev
---
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
  name: manualscalertrait.core.oam.dev
spec:
  appliesToWorkloads:
  - containerizedworkloads.core.oam.dev
  definitionRef:
    name: manualscalertrait.core.oam.dev
---
apiVersion: core.oam.dev/v1alpha2
kind: ScopeDefinition
metadata:
  name: networkscope.core.oam.dev
spec:
  allowComponentOverlap: true
  definitionRef:
    name: networkscope.core.oam.dev
Note:
1) For Kubernetes implementations of OAM, this name is the name of the CRD in Kubernetes and takes the form <plural-kind>.<group>. According to community best practice, only one version of a CRD should run in a cluster at a time. Generally, new versions are forward compatible, and all resources are upgraded to the latest version at once. If two versions do exist at the same time, use kubectl get crd <name> to see which one to choose.
2) The Definition layer is not oriented to end users; it is mainly used by platform implementations. For non-Kubernetes implementations, if multiple versions exist, the OAM implementation platform is responsible for showing the different versions to end users.
In the original mode, at the Workload and Trait levels, only the spec part of a CR was extracted and placed in the workloadSettings and properties fields, respectively. From these fields the Kubernetes CR can be "deduced". However, this does not help with connecting existing CRDs in the Kubernetes ecosystem: their specs must be redefined in a different format.
# Old version, shown for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: rediscluster
spec:
  workloadType: cache.crossplane.io/v1alpha1.RedisCluster
  workloadSettings:
    engineVersion: 1.0
    region: cn
# Old version, shown for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: custom-single-app
  annotations:
    version: v1.0.0
    description: "Customized version of single-app"
spec:
  variables:
  components:
  - componentName: frontend
    instanceName: web-front-end
    parameterValues:
    traits:
    - name: manual-scaler
      properties:
        replicaCount: 5
In the new version, the CR is embedded directly, so the complete CR description appears under the workload and trait fields.
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: example-server
spec:
  parameters:
  - name: xxx
    fieldPaths:
    - "spec.osType"
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: Server
    spec:
      osType: linux
      containers:
      - name: my-cool-server
        image:
          name: example/very-cool-server:1.0.0
        ports:
        - name: http
          value: 8080
        env:
        - name: CACHE_SECRET
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: cool-example
spec:
  components:
  - componentName: example-server
    traits:
    - trait:
        apiVersion: core.oam.dev/v1alpha2
        kind: ManualScalerTrait
        spec:
          replicaCount: 3
The benefits of this change are obvious. Let's take a look at the key benefits:
1) It is easy to connect CRDs from the existing Kubernetes ecosystem, and even Kubernetes-native resources such as Deployment (accessed as a custom workload).
2) The field definitions at the Kubernetes CR level are mature, and parsing and validation are left entirely to the CRD system.
You may also notice that the structure of traits is []trait{CR} instead of []CR; a seemingly redundant trait field is added, mainly for two reasons:
1) For OAM, it is an important feature that developers reserve fields for the O&M personnel to overwrite.
2) As reflected in the OAM spec workflow, developers define parameters in a Component, and the O&M personnel overwrite the corresponding parameters through parameterValues in the ApplicationConfiguration (AppConfig).
In the initial design of parameter passing, a fromParam field followed each field. Once custom schemas are supported, this method cannot cover all scenarios.
# Old version, shown for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: rediscluster
spec:
  workloadType: cache.crossplane.io/v1alpha1.RedisCluster
  parameters:
  - name: engineVersion
    type: string
  workloadSettings:
  - name: engineVersion
    type: string
    fromParam: engineVersion
Later the following scheme was proposed:
# Old version, shown for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: rediscluster
spec:
  workloadType: cache.crossplane.io/v1alpha1.RedisCluster
  parameters:
  - name: engineVersion
    type: string
  workloadSettings:
    engineVersion: "[fromParam(engineVersion)]"
The biggest problem with this scheme is that it adds dynamic functions to what should be static IaD (Infrastructure as Data), which complicates both understanding and usage.
After many discussions, the new scheme describes the positions of the parameters to be injected in the form of JsonPath, which keeps the AppConfig static from the user's point of view.
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: example-server
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: Server
    spec:
      containers:
      - name: my-cool-server
        image:
          name: example/very-cool-server:1.0.0
        ports:
        - name: http
          value: 8080
        env:
        - name: CACHE_SECRET
          value: cache
  parameters:
  - name: instanceName
    required: true
    fieldPaths:
    - ".metadata.name"
  - name: cacheSecret
    required: true
    fieldPaths:
    - ".workload.spec.containers[0].env[0].value"
fieldPaths is an array in which each element specifies the field in the workload that the parameter maps to.
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: my-app-deployment
spec:
  components:
  - componentName: example-server
    parameterValues:
    - name: cacheSecret
      value: new-cache
In AppConfig, parameterValues is still used to overwrite parameters in Component.
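To make the overwrite concrete, here is a hedged sketch of the workload CR that a Kubernetes implementation of OAM could render from the Component and AppConfig above, after resolving the JsonPath in fieldPaths (the exact output shape depends on the implementation):

```yaml
# Hypothetical rendered result: the cacheSecret parameter's fieldPath
# ".workload.spec.containers[0].env[0].value" is resolved, and the value
# "cache" from the Component is overwritten with "new-cache" from the AppConfig.
apiVersion: core.oam.dev/v1alpha2
kind: Server
metadata:
  name: example-server   # assumption: the instanceName parameter was not set
spec:
  containers:
  - name: my-cool-server
    image:
      name: example/very-cool-server:1.0.0
    ports:
    - name: http
      value: 8080
    env:
    - name: CACHE_SECRET
      value: new-cache   # was "cache"; overwritten via parameterValues
```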
Originally, the component concept was called ComponentSchematic, mainly because it contained some syntactic descriptions and choices. For example, core workloads (containers) and extended workloads (workloadSettings) were written differently: containers defines concrete parameters, whereas workloadSettings is more like a schema describing how parameters should be filled in. The workloadSettings of v1alpha1 even included type and description fields, which made it more ambiguous still.
# Old version, shown for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: rediscluster
spec:
  containers:
    ...
  workloadSettings:
  - name: engineVersion
    type: string
    description: engine version
    fromParam: engineVersion
  ...
In v1alpha2, this concept was renamed Component, which is explicitly an instance of a Workload. All syntax definitions come from the actual CRD referenced in the WorkloadDefinition. In a Kubernetes implementation, the WorkloadDefinition references the CRD, and Component.spec.workload holds an instance CR written against that CRD.
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: example-server
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: Server
    spec:
      ...
In v1alpha1, scopes were created by an AppConfig. As the example shows, a scope is also essentially a CR and can be "inferred" to create a CR. However, scopes are positioned to hold components from different AppConfigs, and a scope is not itself an application, so it was never appropriate to create a scope from an AppConfig.
# Old version, shown for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: my-vpc-network
spec:
  variables:
  - name: networkName
    value: "my-vpc"
  scopes:
  - name: network
    type: core.oam.dev/v1alpha1.Network
    properties:
      network-id: "[fromVariable(networkName)]"
      subnet-ids: "my-subnet1, my-subnet2"
The v1alpha2 version uses CRs to represent instances. To make the Scope concept clearer and easier to map to different scope types, scopes are now pulled out of AppConfig and created directly as CRs of the CRD defined by a ScopeDefinition. Refer to the following examples:
apiVersion: core.oam.dev/v1alpha2
kind: ScopeDefinition
metadata:
  name: networkscope.core.oam.dev
spec:
  allowComponentOverlap: true
  definitionRef:
    name: networkscope.core.oam.dev
---
apiVersion: core.oam.dev/v1alpha2
kind: NetworkScope
metadata:
  name: example-vpc-network
  labels:
    region: us-west
    environment: production
spec:
  networkId: cool-vpc-network
  subnetIds:
  - cool-subnetwork
  - cooler-subnetwork
  - coolest-subnetwork
  internetGatewayType: nat
Use scope references in an AppConfig as shown below:
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: custom-single-app
  annotations:
    version: v1.0.0
    description: "Customized version of single-app"
spec:
  components:
  - componentName: frontend
    scopes:
    - scopeRef:
        apiVersion: core.oam.dev/v1alpha2
        kind: NetworkScope
        name: my-vpc-network
  - componentName: backend
    scopes:
    - scopeRef:
        apiVersion: core.oam.dev/v1alpha2
        kind: NetworkScope
        name: my-vpc-network
Variables were included in v1alpha1 so that public variables could be referenced within an AppConfig to reduce redundancy, hence the variables list. In practice, however, the reduced redundancy did not noticeably reduce the complexity of the OAM spec; on the contrary, the added dynamic functions increased it significantly. Moreover, functions such as fromVariable can be fully replaced by tools such as helm template or kustomize, which render a complete OAM spec for use. Therefore, the variables list and the related fromVariable function have been removed, which does not affect any features.
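As a hedged illustration of how external templating replaces fromVariable (the file layout and values below are hypothetical), a Helm chart can render a fully static AppConfig before it is applied:

```yaml
# values.yaml (hypothetical)
varName: SUPPLIED_VALUE
---
# templates/appconfig.yaml (hypothetical): running `helm template` substitutes
# {{ .Values.varName }} where the removed [fromVariable(VAR_NAME)] call used
# to appear, so the applied AppConfig contains only static data.
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: my-app-deployment
spec:
  components:
  - componentName: my-web-app-component
    parameterValues:
    - name: ANOTHER_PARAMETER
      value: "{{ .Values.varName }}"
```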
# Old version, shown for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: my-app-deployment
spec:
  variables:
  - name: VAR_NAME
    value: SUPPLIED_VALUE
  components:
  - componentName: my-web-app-component
    instanceName: my-app-frontent
    parameterValues:
    - name: ANOTHER_PARAMETER
      value: "[fromVariable(VAR_NAME)]"
    traits:
    - name: ingress
      properties:
        DATA: "[fromVariable(VAR_NAME)]"
Now that WorkloadDefinition defines the Workload and Component is an instance of it, the original six core workloads all become the same WorkloadDefinition with identical field descriptions; they differ only in their constraints on and demands for traits. Therefore, the specs of the original six core workloads have been merged into a single workload type named ContainerizedWorkload. Meanwhile, we plan to add the concept of policy, which lets developers express their demands on O&M policies, that is, Component developers can express which traits they want attached.
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  name: containerizedworkloads.core.oam.dev
spec:
  definitionRef:
    name: containerizedworkloads.core.oam.dev
Refer to the following example to understand how to use ContainerizedWorkload:
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: frontend
  annotations:
    version: v1.0.0
    description: "A simple webserver"
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: ContainerizedWorkload
    metadata:
      name: sample-workload
    spec:
      osType: linux
      containers:
      - name: web
        image: example/charybdis-single:latest@@sha256:verytrustworthyhash
        resources:
          cpu:
            required: 1.0
          memory:
            required: 100MB
        env:
        - name: MESSAGE
          value: default
  parameters:
  - name: message
    description: The message to display in the web app.
    required: true
    type: string
    fieldPaths:
    - ".spec.containers[0].env[0].value"
Q) What do we need to do to adapt our existing platform to implement OAM?
For application management platforms originally built on Kubernetes, the transformation generally goes through two phases.
Q) What changes must an existing CRD Operator make to connect to OAM?
An existing CRD Operator can connect to the OAM system smoothly, for example, as an independent extended workload. However, to let end users better appreciate the benefits of OAM's separation of concerns, we strongly recommend splitting a CRD Operator into different CRDs according to the different concerns of development and O&M: the CRDs that developers care about connect to OAM as Workloads, and the CRDs that O&M personnel care about connect to OAM as Traits.
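As a hedged sketch of that split (all names below are hypothetical, not from the OAM spec), a web-application operator could register its developer-facing CRD as a workload and its O&M-facing CRD as a trait:

```yaml
# Hypothetical developer-facing CRD, connected to OAM as a workload
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  name: webapps.example.com
spec:
  definitionRef:
    name: webapps.example.com
---
# Hypothetical O&M-facing CRD, connected to OAM as a trait that applies
# only to the workload above
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
  name: autoscalepolicies.example.com
spec:
  appliesToWorkloads:
  - webapps.example.com
  definitionRef:
    name: autoscalepolicies.example.com
```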
The OAM spec and model have solved many existing problems, but the journey for OAM has just begun. OAM is a neutral open-source project. We welcome more people to join us in defining the future of the delivery of cloud-native applications.
Participation: Refer to the following links to contribute.
Open Application Model: Carving building blocks for Platforms