Container Service for Kubernetes: ACK release notes 2024

Last Updated: Nov 01, 2024

This topic describes the release notes for Container Service for Kubernetes (ACK) and provides links to the relevant references.

Background information

  • The following Kubernetes versions are supported by Container Service for Kubernetes (ACK): 1.31, 1.30, and 1.28. For more information about how ACK supports Kubernetes versions, see Support for Kubernetes versions.

  • The following operating systems are supported by Container Service for Kubernetes (ACK): ContainerOS, Alibaba Cloud Linux 3, Alibaba Cloud Linux 3 (ARM), Alibaba Cloud Linux 3 (UEFI), Windows, Red Hat, and Ubuntu. For more information, see Overview of OS images.

September 2024

Product

Feature

Description

Region

References

Container Service for Kubernetes

Support for Kubernetes 1.31

Kubernetes 1.31 is supported. You can create ACK clusters that run Kubernetes 1.31 or update ACK clusters from earlier Kubernetes versions to Kubernetes 1.31.

All regions

Kubernetes 1.31

Deletion protection supported by namespaces and Services

After you enable the policy governance feature, you can enable deletion protection for namespaces or Services that involve business-critical or sensitive data. This helps you avoid the maintenance costs caused by accidental namespace or Service deletion (see the sketch below).

All regions

Related operations: Enable deletion protection for a namespace or a Service
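
The following is a minimal sketch of enabling deletion protection on a namespace with the Kubernetes Python client. The label key used here is a placeholder assumption; take the exact key from Enable deletion protection for a namespace or a Service.

```python
# Sketch: enable deletion protection on a namespace by applying a label.
# The label key below is a placeholder; use the key documented in
# "Enable deletion protection for a namespace or a Service".
from kubernetes import client, config

config.load_kube_config()          # reads the cluster kubeconfig
core_v1 = client.CoreV1Api()

# Hypothetical label key; replace with the value from the ACK documentation.
DELETION_PROTECTION_LABEL = "policy.kubernetes.io/deletion-protection"

patch = {"metadata": {"labels": {DELETION_PROTECTION_LABEL: "true"}}}
core_v1.patch_namespace(name="production", body=patch)
print("Deletion protection label applied to namespace 'production'.")
```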

Tracing supported by the NGINX Ingress controller

The trace data of the NGINX Ingress controller can be reported to Managed Service for OpenTelemetry. Managed Service for OpenTelemetry persists the trace data and aggregates and computes the trace data in real time to generate monitoring data, which includes trace details and real-time topology. You can troubleshoot and diagnose issues based on the monitoring data.

All regions

Enable tracing for the NGINX Ingress controller
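
As a rough sketch, tracing in the open source NGINX Ingress controller is switched on through its ConfigMap. The ConfigMap name, namespace, and collector endpoint below are assumptions; use the values given in Enable tracing for the NGINX Ingress controller.

```python
# Sketch: turn on OpenTelemetry tracing in the NGINX Ingress controller by
# patching its ConfigMap. The ConfigMap name/namespace and the collector
# endpoint are assumptions; follow the linked topic for the values used by
# ACK and Managed Service for OpenTelemetry.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

patch = {
    "data": {
        "enable-opentelemetry": "true",
        # Replace with the OTLP endpoint issued by Managed Service for OpenTelemetry.
        "otlp-collector-host": "your-otel-collector.example.internal",
        "otlp-collector-port": "4317",
    }
}
core_v1.patch_namespaced_config_map(
    name="nginx-configuration", namespace="kube-system", body=patch
)
```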

Cost insights for Knative Services

The cost insights feature of ACK can be enabled for Knative Services. This feature helps the finance department analyze resource usage and allocate costs from multiple dimensions, and it also offers suggestions on cost savings. After you enable cost insights for a Knative Service, you can view the estimated cost of the Service in real time.

All regions

Enable the cost insights feature in Knative Service

Risk identification based on cost insights for cluster workloads

The cost insights feature can be used to identify risks in cluster workloads. You can enable this feature to quickly identify risks related to stability, performance, and cost in cluster workloads. This feature can track the utilization of cluster resources, provide detailed information about the resource configurations of Burstable pods, and identify risks in BestEffort pods.

All regions

Use cost insights to identify risks for cluster workloads

Spark Operator supported for running Spark jobs

Spark Operator can be used to run Spark jobs in ACK clusters. This helps data engineers quickly and efficiently run and manage big data processing jobs.

All regions

Use Spark Operator to run Spark jobs
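
A minimal sketch of submitting a Spark job by creating a SparkApplication custom resource, which is how the open source Spark Operator is driven. The CRD group and version follow the open source operator (sparkoperator.k8s.io/v1beta2); the image, paths, and service account are illustrative only.

```python
# Sketch: submit a Spark job through Spark Operator by creating a
# SparkApplication custom resource. Image, jar path, and service account
# are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

spark_app = {
    "apiVersion": "sparkoperator.k8s.io/v1beta2",
    "kind": "SparkApplication",
    "metadata": {"name": "spark-pi", "namespace": "default"},
    "spec": {
        "type": "Scala",
        "mode": "cluster",
        "image": "spark:3.5.0",  # illustrative image
        "mainClass": "org.apache.spark.examples.SparkPi",
        "mainApplicationFile": "local:///opt/spark/examples/jars/spark-examples_2.12-3.5.0.jar",
        "sparkVersion": "3.5.0",
        "driver": {"cores": 1, "memory": "1g", "serviceAccount": "spark"},
        "executor": {"cores": 1, "instances": 2, "memory": "1g"},
    },
}
custom.create_namespaced_custom_object(
    group="sparkoperator.k8s.io", version="v1beta2",
    namespace="default", plural="sparkapplications", body=spark_app,
)
```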

Distributed Cloud Container Platform for Kubernetes (ACK One)

Argo CD alerting

Argo CD alerting is supported. The Fleet monitoring feature provided by ACK One uses Managed Service for Prometheus to collect metrics and display monitoring information about Fleet instances on a dashboard. You can customize alert rules to enable real-time monitoring based on custom metrics.

All regions

Configure ACK One Argo CD alerts

Application distribution

The application distribution feature of ACK One is supported. You can use this feature to distribute an application from a Fleet instance to multiple clusters that are associated with the Fleet instance. This feature allows you to configure distribution policies on a Fleet instance. You can use the policies to efficiently distribute eligible Kubernetes resources to clusters that match the policies. In addition, you can configure differentiated policies to meet the deployment requirements of different clusters and applications. Compared with GitOps, this distribution method does not require Git repositories.

All regions

Application distribution overview

Access to Alibaba Cloud DNS PrivateZone supported

Access to Alibaba Cloud DNS PrivateZone is supported. Alibaba Cloud DNS PrivateZone is a VPC-based resolution and management service for private domain names. After a virtual border router (VBR), an IPsec-VPN connection, or a Cloud Connect Network (CCN) instance is connected to a transit router, the on-premises networks that are connected to these network instances can use the transit router to access Alibaba Cloud DNS PrivateZone.

All regions

Manage access to Alibaba Cloud DNS PrivateZone

Statically provisioned NAS volumes supported by registered clusters

Statically provisioned NAS volumes can be mounted to registered clusters. File Storage NAS (NAS) is a distributed file system that supports shared access, elastic scaling, high reliability, and high performance. You can mount statically provisioned NAS volumes to registered clusters to persist and share data.

All regions

Mount a statically provisioned NAS volume
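
A sketch of a statically provisioned NAS PV and PVC created with the Kubernetes Python client, assuming the ACK NAS CSI driver name nasplugin.csi.alibabacloud.com; the NAS mount target and export path are placeholders. See Mount a statically provisioned NAS volume for the authoritative fields.

```python
# Sketch: mount a statically provisioned NAS volume in a registered cluster.
# The driver name follows the ACK NAS CSI plugin; the NAS mount target and
# path are placeholders.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "nas-pv"},
    "spec": {
        "capacity": {"storage": "20Gi"},
        "accessModes": ["ReadWriteMany"],
        "csi": {
            "driver": "nasplugin.csi.alibabacloud.com",
            "volumeHandle": "nas-pv",
            "volumeAttributes": {
                "server": "xxxx.cn-hangzhou.nas.aliyuncs.com",  # NAS mount target (placeholder)
                "path": "/share",
            },
        },
    },
}
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "nas-pvc", "namespace": "default"},
    "spec": {
        "accessModes": ["ReadWriteMany"],
        "resources": {"requests": {"storage": "20Gi"}},
        "volumeName": "nas-pv",
        "storageClassName": "",
    },
}
core_v1.create_persistent_volume(body=pv)
core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```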

Cloud-native AI suite

Auto recovery for FUSE mount targets

During the lifecycle of an application pod, the Filesystem in Userspace (FUSE) daemon may unexpectedly crash. As a result, the application pod can no longer use the FUSE file system to access data. After you enable the auto recovery feature for the mount targets of a FUSE file system, access to application data can be restored without the need to restart the application pods.

All regions

Enable the auto recovery feature for FUSE mount targets

Cross-namespace dataset sharing

Datasets can be shared across namespaces. Fluid supports data access and cache sharing across namespaces. With Fluid, data that is shared among multiple teams needs to be cached only once. This greatly improves data utilization efficiency and data management flexibility, and facilitates collaboration between R&D teams.

All regions

Share datasets across namespaces

ACK Edge

ENS management

The ENS management feature is supported. ACK Edge clusters allow you to run containers on Edge Node Service (ENS) nodes. You can manage ENS instances deployed across multiple regions and Internet service providers (ISPs) in a unified and containerized manner. You can create ENS disks and Edge Load Balancer instances to provide cloud-native storage and networking capabilities.

All regions

ENS management overview

Service topology management supported by node pools

Service topology management is supported by node pools. The backend endpoints of Kubernetes Services are randomly distributed across nodes. Consequently, when Service requests are distributed to nodes across node groups, the requests may fail to reach the nodes or may not receive timely responses. You can configure a Service topology to expose an application on an edge node only to the current node or to nodes in the same edge node pool (see the sketch below).

All regions

Configure a Service topology
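
A sketch of restricting a Service to the local edge node pool by annotating it, assuming the open source OpenYurt topology annotation (openyurt.io/topologyKeys); confirm the key and supported values in Configure a Service topology.

```python
# Sketch: restrict a Service's endpoints to the local edge node pool by
# annotating the Service. The annotation key/value follow the open source
# OpenYurt convention and are assumptions.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

patch = {
    "metadata": {
        "annotations": {
            # "openyurt.io/nodepool" keeps traffic inside the same edge node pool;
            # "kubernetes.io/hostname" would restrict it to the current node.
            "openyurt.io/topologyKeys": "openyurt.io/nodepool",
        }
    }
}
core_v1.patch_namespaced_service(name="my-edge-svc", namespace="default", body=patch)
```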

August 2024

Product

Feature

Description

Region

References

Container Service for Kubernetes

Inventory health status monitoring supported by node instant scaling

Inventory health status monitoring is supported by node instant scaling. The node instant scaling feature dynamically selects instance types and zones based on the inventory status of Elastic Compute Service (ECS) instances. You can check the inventory health status ConfigMap to monitor the inventory health of the instance types configured for a node pool, obtain suggestions for these instance types, and proactively analyze and adjust them.

All regions

View the health status of node instant scaling

Multiple update frequencies supported by auto cluster update

The following update frequencies are supported by auto cluster update: Latest Patch Version (patch), Second-Latest Minor Version (stable), and Latest Minor Version (rapid).

All regions

Automatically update a cluster

GPU sharing and memory isolation based on MPS

GPU sharing and memory isolation are supported based on Multi-Process Service (MPS). You can use MPS to manage Compute Unified Device Architecture (CUDA) applications that run on multiple NVIDIA GPUs or Message Passing Interface (MPI) requests. This allows you to share GPU resources. You can add specific labels to node pools in the ACK console to enable GPU sharing and GPU memory isolation in MPS mode for AI applications.

All regions

Use MPS for GPU sharing and memory isolation
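
A sketch of labeling a GPU node for a sharing mode with the Kubernetes Python client. The label key and value shown are placeholders for the MPS-mode label documented in Use MPS for GPU sharing and memory isolation; in practice the label is typically set on the node pool in the ACK console.

```python
# Sketch: apply a GPU-sharing label to a node. The label key/value below are
# placeholders for the MPS-mode label that ACK documents; in practice the
# label is usually configured on the node pool in the ACK console.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

GPU_SHARE_LABEL_KEY = "ack.node.gpu.schedule"   # placeholder key
GPU_SHARE_LABEL_VALUE = "mps"                   # placeholder value for MPS mode

patch = {"metadata": {"labels": {GPU_SHARE_LABEL_KEY: GPU_SHARE_LABEL_VALUE}}}
core_v1.patch_node(name="cn-hangzhou.192.168.0.10", body=patch)
```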

Knative 1.12.5

Knative 1.12.5-aliyun.7 is supported. This version is compatible with Kourier 1.12 and supports Container Registry Enterprise Edition and the dashboard for preemptible ECS instances.

All regions

Knative release notes

ACK One

Multi-cluster applications

Multi-cluster applications are supported. You can use an Argo CD ApplicationSet to automatically create one or more applications from one orchestration template (see the sketch below).

All regions

Create a multi-cluster application
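
A minimal sketch of an Argo CD ApplicationSet with a list generator, created through the Kubernetes Python client against a Fleet instance. The repository URL, cluster names, API server addresses, and Argo CD namespace are illustrative assumptions.

```python
# Sketch: create an Argo CD ApplicationSet (argoproj.io/v1alpha1) with a list
# generator, which stamps out one Application per associated cluster.
from kubernetes import client, config

config.load_kube_config()   # kubeconfig of the ACK One Fleet instance (assumption)
custom = client.CustomObjectsApi()

appset = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "ApplicationSet",
    "metadata": {"name": "guestbook", "namespace": "argocd"},
    "spec": {
        "generators": [{
            "list": {"elements": [
                {"cluster": "cluster-beijing", "url": "https://1.2.3.4:6443"},
                {"cluster": "cluster-hangzhou", "url": "https://5.6.7.8:6443"},
            ]}
        }],
        "template": {
            "metadata": {"name": "{{cluster}}-guestbook"},
            "spec": {
                "project": "default",
                "source": {
                    "repoURL": "https://github.com/argoproj/argocd-example-apps.git",
                    "targetRevision": "HEAD",
                    "path": "guestbook",
                },
                "destination": {"server": "{{url}}", "namespace": "guestbook"},
            },
        },
    },
}
custom.create_namespaced_custom_object(
    group="argoproj.io", version="v1alpha1",
    namespace="argocd", plural="applicationsets", body=appset,
)
```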

Elastic node pools using custom images supported by registered clusters

Custom images pre-installed with the required software packages can be used to greatly reduce the amount of time required by an on-cloud node to reach the Ready state and accelerate system startup.

All regions

Build an elastic node pool with a custom image

Argo Workflows SDK for Python supported for large-scale workflow creation

Large-scale workflows can be created by using Argo Workflows SDK for Python. A new topic is added to the workflow cluster best practices to describe how to create large-scale workflows with Hera, an Argo Workflows SDK for Python. Hera is an alternative to YAML and provides an easy way to orchestrate and test complex workflows in Python. Hera also integrates seamlessly with the Python ecosystem and simplifies workflow design (see the sketch below).

All regions

Use Argo Workflows SDK for Python to create large-scale workflows
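
A minimal Hera sketch, assuming the Hera v5 API; the Argo Server endpoint and access token are placeholders that you obtain from the ACK One workflow cluster.

```python
# Sketch: a "hello world" workflow submitted with Hera (Argo Workflows SDK for
# Python). Assumes the Hera v5 API; host and token are placeholders for the
# ACK One workflow cluster's Argo Server endpoint and access token.
from hera.shared import global_config
from hera.workflows import Steps, Workflow, script

global_config.host = "https://argo-server.example.com:2746"  # placeholder endpoint
global_config.token = "<ARGO_SERVER_TOKEN>"                  # placeholder token

@script()
def echo(message: str):
    print(message)

with Workflow(generate_name="hello-hera-", entrypoint="steps", namespace="default") as w:
    with Steps(name="steps"):
        echo(arguments={"message": "hello from Hera"})

w.create()   # submits the workflow to the workflow cluster
```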

Event-driven CI pipelines based on EventBridge

Event-driven Continuous Integration (CI) pipelines based on EventBridge are supported. A new topic is added to the workflow cluster best practices to describe how to build event-driven automated CI pipelines. You can build efficient, fast, and cost-effective event-driven automated CI pipelines based on EventBridge and the distributed Argo workflows to simplify and accelerate application delivery.

All regions

Event-driven CI pipelines based on EventBridge

Cloud-native AI suite

Dify supported for creating AI-powered Q&A assistants

Dify can be used to create AI-powered Q&A assistants. Dify is a platform that can integrate enterprise or individual knowledge bases with large language model (LLM) applications. You can use Dify to design customized AI-assisted Q&A solutions and apply the solutions to your business. This helps facilitate business development and management.

All regions

Use Dify to create a customized AI-powered Q&A assistant without coding

Flowise installation and management

The Flowise component can be installed in ACK clusters. A new topic is added to describe how to install and manage Flowise in ACK clusters. The topic also provides answers to some frequently asked questions about Flowise. In most cases, an LLM application is optimized through multiple iterations during the development process. Flowise provides a drag-and-drop UI to enable quick iterations in a low-code manner. This accelerates the transition from the testing environment to the production environment.

All regions

None

TensorRT-LLM supported for deploying Qwen2 models as inference services

TensorRT-LLM can be used to deploy Qwen2 models as inference services. A new topic is added to describe how to use Triton and TensorRT-LLM to deploy a Qwen2 model as an inference service in ACK. The topic uses the Qwen2-1.5B-Instruct model and A10 GPUs as an example, uses Fluid Dataflow to prepare data during model deployment, and uses Fluid to accelerate model loading.

All regions

Use TensorRT-LLM to deploy a Qwen2 model as an inference service

ACK Edge

Cloud-native AI suite

The cloud-native AI suite can be deployed in ACK Edge clusters. The cloud-native AI suite provides AI Dashboard and AI Developer Console to allow you to view the status of your cluster and quickly submit training jobs.

All regions

Deploy the cloud-native AI suite

July 2024

Product

Feature

Description

Region

References

Container Service for Kubernetes

Tracing supported by the NGINX Ingress controller

NGINX Ingress controller v1.10.2-aliyun.1 is released and supports the tracing feature by using Managed Service for OpenTelemetry.

All regions

Enable tracing for the NGINX Ingress controller

Global network policies supported by Poseidon

Poseidon is a component that supports network policies for ACK clusters. Poseidon v0.5.0 introduces cluster-level global network policies, which allow you to manage network connectivity across namespaces.

All regions

Use ACK GlobalNetworkPolicy

Release of ContainerOS 3.3

ContainerOS is an operating system provided by Alibaba Cloud. It is vertically optimized for container scenarios. ContainerOS provides enhanced security, faster startup, and simplified system services and software packages. The kernel version of ContainerOS 3.3 is updated to 5.10.134-17.0.2.lifsea8. By default, cgroup v2 is used to isolate container resources. Vulnerabilities and defects are fixed.

All regions

Release notes for ContainerOS images

Custom worker RAM roles for node pools

By default, a Container Service for Kubernetes (ACK) managed cluster creates a default worker Resource Access Management (RAM) role shared by all nodes. If you authorize an application using the worker RAM role, the permissions are shared among all nodes in the cluster, which may unintentionally grant more permissions than necessary. You can assign custom worker RAM roles to different node pools upon creation to reduce the potential risk of sharing a worker RAM role among all nodes in the cluster.

All regions

Use custom worker RAM roles

New security policy added to the security policy library

The ACKBlockVolumeTypes policy is added. You can use this policy to specify the volumes that cannot be used by pods in the specified namespaces.

All regions

ACKBlockVolumeTypes

New NVIDIA GPU driver version

NVIDIA GPU driver 550.90.07 is supported.

All regions

NVIDIA driver versions supported by ACK

Best practices for using LMDeploy to deploy a Qwen model as an inference service

The Qwen1.5-4B-Chat model and A10 GPU are used to demonstrate how to use the LMDeploy framework to deploy the Qwen model as an inference service in ACK.

All regions

Use LMDeploy to deploy the Qwen model inference service

Best practices for using KServe to deploy inference services that share a GPU

In some scenarios, you may want multiple inference tasks to share the same GPU to improve GPU utilization. The Qwen1.5-0.5B-Chat model and the V100 GPU are used to describe how to use KServe to deploy inference services that share a GPU.

All regions

Deploy inference services that share a GPU

Distributed Cloud Container Platform for Kubernetes (ACK One)

Best practices for event-driven CI pipelines based on EventBridge

You can build efficient, fast, and cost-effective event-driven automated CI pipelines based on EventBridge and the distributed Argo workflows to significantly simplify and accelerate application delivery.

All regions

Event-driven CI pipelines based on EventBridge

Multi-cluster application orchestration through GitOps

You can orchestrate multi-cluster applications in the GitOps console and use Git repositories as application sources to implement version management, multi-cluster distribution, and Continuous Deployment (CD) for applications that use multiple orchestration methods, such as YAML manifests, Helm charts, and Kustomize.

All regions

Use an ApplicationSet to create multiple applications

Elastic node pools using custom images in registered clusters

Custom images pre-installed with the required software packages can be used to greatly reduce the amount of time required by an on-cloud node to reach the Ready state and accelerate system startup.

All regions

Build an elastic node pool with a custom image

Cloud-native AI suite

Filesystem in Userspace (FUSE) mount target auto repair

Fluid supports polling check and periodic automatic repair for FUSE mount targets to improve the stability of access to business data.

All regions

N/A

ACK Edge

Support for Kubernetes 1.28

You can create ACK Edge clusters that run Kubernetes 1.28.9-aliyun.1.

All regions

Release notes for ACK Edge of Kubernetes 1.28

Support for the Container Storage Interface (CSI) plug-in

This topic describes the types of storage media supported by the volume plug-ins for ACK Edge clusters and the limits of the volume plug-ins based on node types and integration methods.

All regions

Storage overview

Support for the cloud-native AI suite

ACK Edge clusters support all features of the cloud-native AI suite in on-cloud environments, but some features are not supported in on-premises environments. The capabilities and limits of the cloud-native AI suite supported by different node types and network types are different.

All regions

Overview of the cloud-native AI suite

Best practices for using Ingresses

This topic describes the usage notes for deploying an Ingress controller in an edge node pool and the differences between deploying an Ingress controller in an on-cloud node pool and in an edge node pool.

All regions

June 2024

Product

Feature

Description

Region

References

Container Service for Kubernetes

Support for Kubernetes 1.30

Kubernetes 1.30 is supported. You can create ACK clusters that run Kubernetes 1.30 or update ACK clusters from earlier Kubernetes versions to Kubernetes 1.30.

All regions

Node pool OS parameter customization

If the default parameter settings of the node OS, such as Linux, do not meet your business requirements, you can customize the OS parameters of your node pools to improve the OS performance.

All regions

Customize the OS parameters of a node pool

Support for Ubuntu

Ubuntu 22.04 is supported. You can use Ubuntu 22.04 as the node OS of ACK clusters that run Kubernetes 1.30 or later.

All regions

Overview of OS images

Descheduling enhanced

Descheduling evicts specific pods from a node so that they can be rescheduled to other nodes. In scenarios where the resource utilization among nodes is imbalanced, nodes are overloaded, or new scheduling policies are required, you can use the descheduling feature to resolve issues or meet your requirements. The Koordinator Descheduler module of the ack-koordinator component is enhanced in terms of the following capabilities: descheduling policies, pod eviction methods, and eviction traffic control.

All regions

Network Load Balancer (NLB) instances configurable by using Services in the ACK console

Services can be created and managed in the ACK console to configure Network Load Balancer (NLB) instances. NLB is a Layer 4 load balancing service intended for the Internet of Everything (IoE). NLB offers ultra-high performance and can automatically scale on demand. An NLB instance supports up to 100 million concurrent connections.

All regions
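
A sketch of creating a LoadBalancer Service backed by an NLB instance with the Kubernetes Python client. The loadBalancerClass value and the zone-maps annotation follow the ACK cloud-controller-manager convention as an assumption; the zone and vSwitch IDs are placeholders.

```python
# Sketch: create a LoadBalancer Service that provisions an NLB instance.
# The loadBalancerClass value and the zone-maps annotation are assumptions
# about the ACK cloud-controller-manager convention; vSwitch IDs are placeholders.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "name": "nlb-demo",
        "namespace": "default",
        "annotations": {
            # Zone ID to vSwitch ID mapping used when the NLB instance is created.
            "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-zone-maps":
                "cn-hangzhou-k:vsw-aaa,cn-hangzhou-j:vsw-bbb",
        },
    },
    "spec": {
        "type": "LoadBalancer",
        "loadBalancerClass": "alibabacloud.com/nlb",
        "selector": {"app": "nginx"},
        "ports": [{"port": 80, "targetPort": 80, "protocol": "TCP"}],
    },
}
core_v1.create_namespaced_service(namespace="default", body=service)
```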

New release of csi-provisioner

csi-provisioner allows you to automatically create volumes. A new version of csi-provisioner is released and the managed version of csi-provisioner, which does not consume node resources, is also released. File Storage NAS (NAS) file systems can be mounted on Alibaba Cloud Linux 3 by using the Transport Layer Security (TLS) protocol. Ubuntu nodes are supported by csi-provisioner.

All regions

csi-provisioner

ACK One

Fleet monitoring enhanced

Fleet monitoring is supported by ACK One and global monitoring for clusters associated with Fleet instances is enhanced. A dashboard is provided to display monitoring information about the Fleet instances, including metrics of key components and the GitOps system. The global monitoring feature collects metrics from different clusters and displays global monitoring information and cost insights data about these clusters on a dashboard.

All regions

Fleet monitoring

Cloud-native AI suite

Cloud-native AI suite free of charge

The cloud-native AI suite is free of charge. You can use all features provided by the cloud-native AI suite to build customized AI production systems on ACK and implement full-stack optimizations for AI and machine learning (ML) applications and systems. The cloud-native AI suite allows you to experience the benefits of the cloud-native AI technology and helps facilitate business innovation and intelligent transformation.

All regions

[Free Component Notice] Cloud-native AI suite is free of charge

ACK Edge

Disk storage supported by on-cloud node pools

The CSI used in ACK managed clusters is supported by ACK Edge clusters. The CSI component installed in on-cloud node pools of ACK Edge clusters provides the same features as the CSI component installed in ACK managed clusters. You can mount disks to on-cloud node pools by configuring persistent volumes (PVs) and persistent volume claims (PVCs).

All regions

None

Access to workloads in data centers supported by using Express Connect circuits

Computing devices in data centers and edge devices can be connected to ACK. The API server of an ACK cluster can use Express Connect circuits to access pods or Services deployed at the edge. This feature is implemented based on the edge controller manager (ECM). The ECM is responsible for automating routing configuration for access from VPCs to pods deployed at the edge.

All regions

Network management overview

May 2024

Product

Feature

Description

Region

References

Container Service for Kubernetes

Reuse of NLB instances across VPCs supported by cloud-controller-manager

cloud-controller-manager v2.9.1 is released, which supports reuse of NLB instances across VPCs and NLB server group weights. It supports scenarios where an NLB instance is connected to both ECS instances and pods. This version also optimizes support for NLB IPv6.

All regions

Cloud Controller Manager

Custom routing rules visualized for ALB Ingresses

Custom routing rules can be created in a visualized manner for Application Load Balancer (ALB) Ingresses. You can specify routing conditions to route requests based on paths, domain names, and request headers, and specify actions to route requests to specific Services or return fixed responses.

All regions

Customize the routing rules of an ALB Ingress

NVMe disk multi-instance mounting and reservation

An NVMe disk can be mounted to multiple instances and the reservation feature is supported. You can mount an NVMe disk to at most 16 instances and further use the reservation feature that complies with the NVMe specifications. These features help ensure data consistency for applications such as databases and enable you to perform failovers much faster.

All regions

Use the multi-attach and NVMe reservation features of NVMe disks

ossfs version switching by using the feature gate

In CSI 1.30.1 and later, you can enable the corresponding feature gate to switch to ossfs 1.91 or later to improve the performance of file systems. If you require high file system performance, we recommend that you switch the ossfs version to 1.91 or later.

All regions

ACK One

CI pipelines created based on workflow clusters for Golang projects

Distributed Cloud Container Platform for Kubernetes (ACK One) workflow clusters are developed based on open source Argo Workflows. With the benefits of ultrahigh elasticity, auto scaling, and zero O&M costs, hosted Argo Workflows can help you quickly create CI pipelines with low costs. This best practice describes how to create CI pipelines for Golang projects based on workflow clusters.

All regions

Create CI pipelines for Golang projects in workflow clusters

Cloud-native AI suite

Dataset mount target dynamic mounting supported by Fluid

Fluid supports dynamically mounting dataset mount targets. Fluid can automatically update and dynamically mount the dataset mount targets that correspond to the PV and PVC within the container.

All regions

N/A

April 2024

Product

Feature

Description

Region

References

Container Service for Kubernetes

Anomaly diagnostics for ACK clusters supported by ACK AI Assistant

Anomaly diagnostics for ACK clusters is supported by ACK AI Assistant. You can use ACK AI Assistant to analyze and diagnose failed tasks, error logs, and component update failures in ACK clusters. This simplifies your O&M work.

All regions

Use ACK AI Assistant to help troubleshoot issues and find answers to your questions

RRSA authentication for OSS volumes

RAM Roles for Service Accounts (RRSA) authentication can be configured for PVs to limit the permissions to perform API operations on specific Object Storage Service (OSS) volumes. This enables you to regulate access to cloud resources in a fine-grained manner and enhance cluster security (see the sketch below).

All regions

Use RRSA authentication to mount a statically provisioned OSS volume
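
A sketch of a statically provisioned OSS volume that uses RRSA authentication, created with the Kubernetes Python client. The RRSA-related volume attributes (authType, roleName), the bucket name, and the endpoint are assumptions or placeholders; see Use RRSA authentication to mount a statically provisioned OSS volume for the exact fields.

```python
# Sketch: a statically provisioned OSS PV that uses RRSA authentication.
# The RRSA-related volume attributes (authType, roleName) and the endpoint
# are assumptions/placeholders; verify them against the linked topic.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "oss-pv-rrsa"},
    "spec": {
        "capacity": {"storage": "20Gi"},
        "accessModes": ["ReadOnlyMany"],
        "csi": {
            "driver": "ossplugin.csi.alibabacloud.com",
            "volumeHandle": "oss-pv-rrsa",
            "volumeAttributes": {
                "bucket": "example-bucket",                       # placeholder bucket
                "url": "oss-cn-hangzhou-internal.aliyuncs.com",   # placeholder endpoint
                "authType": "rrsa",                # assumed attribute for RRSA auth
                "roleName": "example-oss-reader",  # RAM role assumed via RRSA (placeholder)
            },
        },
    },
}
core_v1.create_persistent_volume(body=pv)
```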

EIPs with Anti-DDoS (Enhanced) enabled for pods

ACK Extend Network Controller v0.9.0 can create and manage VPC resources such as NAT gateways and elastic IP addresses (EIPs), and bind EIPs with Anti-DDoS (Enhanced Edition) enabled to pods. This version is suitable for scenarios where you want to enable Anti-DDoS protection for pods that are exposed to the Internet.

All regions

Associate an exclusive EIP with a pod

New predefined security policies added to policy governance

The following predefined security policies are added to the policy governance module: ACKServicesDeleteProtection, ACKPVSizeConstraint, and ACKPVCConstraint.

All regions

Predefined security policies of ACK

ACK Edge

Offline O&M tool for edge nodes

In most cloud-edge collaboration scenarios, edge nodes are usually offline due to network instability. When a node is offline, you cannot perform O&M operations on the node, such as business updates and configuration changes. ACK Edge clusters provide the offline O&M tool that you can use to perform O&M operations on edge nodes in emergency scenarios.

All regions

Offline O&M tool for edge nodes

ACK One

Multi-cluster gateway visualized management

Microservices Engine (MSE) cloud-native gateways can serve as multi-cluster gateways based on the MSE Ingress controller hosted in ACK One. MSE cloud-native gateways allow you to view topologies in a visualized manner and create MSE Ingresses to manage north-south traffic. You can use MSE cloud-native gateways to implement active zone-redundancy, multi-cluster load balancing, and traffic routing to specific clusters based on request headers.

All regions

Manage multi-cluster gateways

Access to OSS from Kubernetes clusters for distributed Argo workflows optimized

A variety of features are added to ACK One Argo Workflows, including multipart uploading for ultra-large files, artifact auto garbage collection, and artifact transmission in streaming mode. These features allow you to manage OSS objects in an efficient and secure manner.

All regions

Configure artifacts

Cloud-native AI suite

Quick MLflow deployment in ACK clusters

MLflow can be deployed in ACK clusters with a few clicks. You can use MLflow to track model training, and manage and deploy machine learning models. The cloud-native AI suite also supports lifecycle management for models in MLflow Model Registry.

All regions

March 2024

Product

Feature

Description

Region

References

Container Service for Kubernetes

Kubeconfig file deletion and kubeconfig recycle bin supported

Alibaba Cloud accounts, RAM users, or RAM roles with certain permissions can be used to view and manage the status of issued kubeconfig files. You can delete kubeconfig files or revoke permissions provided by kubeconfig files that may pose security risks. You can also use the kubeconfig recycle bin to restore kubeconfig files that are deleted within the previous 30 days.

All regions

GPU device isolation

In ACK exclusive GPU scheduling scenarios, ACK provides a mechanism that allows you to isolate a faulty device on a GPU node so that the faulty device is not allocated to newly scheduled pods.

All regions

GPU Device Plugin-related operations

Practices for collecting the metrics of the specified virtual node

In a cluster that has multiple virtual nodes, you can specify a virtual node and collect only its metrics. This reduces the amount of data collected at a time. When a large number of containers are deployed on virtual nodes, this solution can effectively reduce the load of metric collection.

All regions

Collect the metrics of the specified virtual node

February 2024

Product

Feature

Description

Region

References

Container Service for Kubernetes

ACK Virtual Node 2.11.0 released

ACK Virtual Node 2.11.0 supports Windows instances and its scheduling semantics support Windows nodes. This version also allows you to enable the System Operations & Maintenance (SysOM) feature for elastic container instances to monitor resources such as the kernel. In addition, certificates can be generated more efficiently during the creation of Elastic Container Instance-based pods.

All regions

Distributed Cloud Container Platform for Kubernetes (ACK One)

Knative supported by registered clusters

Knative is a Kubernetes-based serverless framework. The purpose of Knative is to create a cloud-native and cross-platform orchestration standard for serverless applications. Knative integrates container builds (or functions), workload management (dynamic scaling), and event models. Knative helps you build an enterprise-class serverless container platform to deploy and manage modern serverless workloads.

All regions

Knative overview

ACK One-based zone-disaster recovery in hybrid cloud environments

If your businesses run in Kubernetes clusters in data centers or on third-party public clouds and you want to use cloud computing to implement zone-disaster recovery for business high availability, you can use ACK One. ACK One allows you to centrally manage traffic, applications, and clusters, route traffic across clusters, and seamlessly perform traffic failovers.

ACK One uses the managed MSE Ingress controller to manage Microservices Engine (MSE) cloud-native gateways that serve as multi-cluster gateways and uses the Ingress API to define traffic routing rules. ACK One can manage Layer 7 north-south traffic in multi-cloud, multi-cluster, and hybrid cloud scenarios. Compared with traditional DNS-based solutions, the zone-disaster recovery system developed based on ACK One multi-cluster gateways reduces the architecture complexity, usage costs, and management costs. It also supports millisecond-level seamless migration and Layer 7 routing.

All regions

Use ACK One to implement zone-disaster recovery in hybrid cloud environments

Support for AI scenarios improved and access to OSS objects accelerated by using Fluid

Fluid is an open source, Kubernetes-native distributed dataset orchestrator and accelerator for data-intensive applications in cloud-native scenarios, such as big data applications and AI applications. You can use Fluid to accelerate access to OSS files in registered clusters.

All regions

Use Fluid to accelerate access to OSS objects
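
A sketch of using Fluid to accelerate access to an OSS bucket: a Dataset that mounts the bucket and a JindoRuntime that provides the cache. The CRD group (data.fluid.io/v1alpha1) follows open source Fluid; the bucket name, endpoint, cache path, and cache size are placeholders, and access credentials are omitted for brevity.

```python
# Sketch: accelerate access to OSS objects with Fluid by creating a Dataset
# and a JindoRuntime. Bucket, endpoint, and cache settings are placeholders;
# credentials (for example, Secret-based encryptOptions) are omitted.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

dataset = {
    "apiVersion": "data.fluid.io/v1alpha1",
    "kind": "Dataset",
    "metadata": {"name": "oss-demo", "namespace": "default"},
    "spec": {
        "mounts": [{
            "mountPoint": "oss://example-bucket/training-data/",
            "name": "training-data",
            "options": {"fs.oss.endpoint": "oss-cn-hangzhou-internal.aliyuncs.com"},
        }]
    },
}
runtime = {
    "apiVersion": "data.fluid.io/v1alpha1",
    "kind": "JindoRuntime",
    "metadata": {"name": "oss-demo", "namespace": "default"},
    "spec": {
        "replicas": 2,
        "tieredstore": {
            "levels": [{"mediumtype": "MEM", "path": "/dev/shm", "quota": "2Gi"}]
        },
    },
}
for plural, body in (("datasets", dataset), ("jindoruntimes", runtime)):
    custom.create_namespaced_custom_object(
        group="data.fluid.io", version="v1alpha1",
        namespace="default", plural=plural, body=body,
    )
```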

DingTalk chatbots for receiving notifications about GitOps application updates

In multi-cluster GitOps continuous delivery scenarios, such as high-availability application deployment and multi-cluster distribution of system components, you can use a variety of notification services. For example, you can use a DingTalk chatbot to receive notifications about GitOps application updates.

All regions

Use a DingTalk chatbot to receive notifications about GitOps application updates

Cloud-native AI suite

Best practices for Ray clusters

You can quickly create a Ray cluster in an ACK cluster and integrate the Ray cluster with Simple Log Service, Managed Service for Prometheus, and ApsaraDB for Redis to optimize log management, observability, and availability. The Ray autoscaler can work with the ACK autoscaler to improve the efficiency of computing resource scaling and increase resource utilization.

All regions

Best practices for Ray clusters

January 2024

Product

Feature

Description

Region

References

Container Service for Kubernetes

ACK AI Assistant released

Container Service for Kubernetes (ACK) AI Assistant is developed by the ACK team based on a large language model. ACK AI Assistant is empowered by technology expertise and years of experience of the ACK team in the Kubernetes and cloud-native technology sectors, the observability of the ACK O&M system, and rich experience provided by experts in ACK diagnostics. It can help you find answers to your questions and diagnose issues related to ACK and Kubernetes based on the large language model.

All regions

Use ACK AI Assistant to help troubleshoot issues and find answers to your questions

OS kernel-level container monitoring capabilities available

Alibaba Cloud provides the Tracing Analysis service that offers developers of distributed applications various features, such as trace mapping, request statistics, and trace topology. The Tracing Analysis service helps you quickly analyze and diagnose performance bottlenecks in a distributed application architecture and improves the efficiency of development and diagnostics for microservices applications. You can install the Application Load Balancer (ALB) Ingress controller and enable the Xtrace feature in a cluster. After the Xtrace feature is enabled, you can view the tracing data.

All regions

Use AlbConfigs to enable Tracing Analysis based on Xtrace

ACK Edge

Support for Kubernetes 1.26

Kubernetes 1.26 is released for ACK Edge clusters. This version optimizes and adds features, such as edge node autonomy and edge node access.

All regions

Release notes for ACK Edge of Kubernetes 1.26

Cloud-edge communication solution updated

ACK Edge clusters that run Kubernetes 1.26 and later support network communication between the on-cloud node pools and edge node pools. Compared with the original solution, the updated solution provides high availability, auto scaling, and cloud-edge container O&M. Raven provides the proxy mode and tunnel mode for cloud-edge communication. The proxy mode allows cross-domain HTTP communication among hosts, and the tunnel mode allows cross-domain communication among containers.

ACK One

Access to the GitOps console through a custom domain name

To access the GitOps console of Distributed Cloud Container Platform for Kubernetes (ACK One) through a custom domain name, you can create a CNAME record to map the custom domain name to the default domain name of GitOps, and configure an SSL certificate. Then, you can use a CloudSSO account to access the GitOps console through https://${your-domain}.

All regions

Access the GitOps console through a custom domain name

Disaster recovery architectures and solutions based on Kubernetes clusters

This practice combines Kubernetes clusters (including Container Service for Kubernetes clusters, clusters on third-party cloud platforms, and clusters in data centers) with networking, database, middleware, and observability cloud services of Alibaba Cloud to help you design disaster recovery architectures and solutions. This allows you to build a more resilient business system.

All regions

Disaster recovery architectures and solutions based on Kubernetes container clusters

Historical releases

To view the historical release notes for ACK, see Historical release notes (before 2024).