Kubernetes clusters for distributed Argo workflows (workflow clusters or Serverless Argo Workflows) are deployed on top of a serverless architecture. This cluster type uses elastic container instances to run Argo workflows. It optimizes the performance of the open source workflow engine and adjusts cluster configurations for efficient, elastic, and cost-effective scheduling of large-scale workflows. This topic describes the console, benefits, architecture, and network design of workflow clusters.
Console
Distributed Cloud Container Platform for Kubernetes (ACK One) console
Benefits
Developed based on open source Argo Workflows, workflow clusters comply with the standards of open source workflows. If you have Argo workflows running in existing Container Service for Kubernetes (ACK) clusters or Kubernetes clusters, you can seamlessly upgrade the clusters to workflow clusters without the need to modify the workflows.
By using workflow clusters, you can easily orchestrate workflows and run each workflow step in its own container. This makes it easy to build efficient continuous integration/continuous deployment (CI/CD) pipelines and to quickly launch large numbers of containers for compute-intensive jobs such as machine learning and data processing jobs.
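As a sketch of what such a pipeline looks like, the following is a minimal open source Argo Workflow manifest that runs two containerized steps in sequence. The step names, parameter names, and image are illustrative, not defaults of workflow clusters:

```yaml
# Minimal Argo Workflow: two containerized steps run in sequence.
# Names and the image are illustrative.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: ci-pipeline-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: build            # step 1
            template: run-step
            arguments:
              parameters: [{name: cmd, value: "echo building"}]
        - - name: test             # step 2: starts after build completes
            template: run-step
            arguments:
              parameters: [{name: cmd, value: "echo testing"}]
    - name: run-step
      inputs:
        parameters:
          - name: cmd
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["{{inputs.parameters.cmd}}"]
```

Each step runs in its own pod, so the two steps above launch two containers; a fan-out over many parameters launches correspondingly many containers in parallel.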
Workflow clusters support fully automated O&M and allow you to focus on workflow development.
Workflow clusters provide high elasticity and auto scaling capabilities to reduce the costs of compute resources.
Workflow clusters support high scheduling reliability and multi-zone load balancing.
Workflow clusters use control planes whose performance, efficiency, stability, and observability are optimized.
Workflow clusters support enhanced Object Storage Service (OSS) management capabilities, such as uploading large objects, artifact garbage collection (GC), and data streaming.
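For reference, open source Argo Workflows can write a step's output artifact to OSS and garbage-collect it, as in the sketch below. The bucket, endpoint, key, and Secret names are placeholders, and the artifactGC field requires a recent Argo Workflows version (v3.4 or later in the open source project):

```yaml
# Sketch: store a step's output in OSS and let Argo garbage-collect it.
# Bucket, endpoint, and Secret names are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: oss-artifact-
spec:
  entrypoint: produce
  artifactGC:
    strategy: OnWorkflowDeletion   # delete artifacts when the workflow is deleted
  templates:
    - name: produce
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["echo hello > /tmp/out.txt"]
      outputs:
        artifacts:
          - name: result
            path: /tmp/out.txt
            oss:
              endpoint: oss-cn-hangzhou.aliyuncs.com
              bucket: my-example-bucket
              key: artifacts/out.txt
              accessKeySecret: {name: my-oss-cred, key: accessKey}
              secretKeySecret: {name: my-oss-cred, key: secretKey}
```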
Architecture
Workflow clusters use open source Argo Workflows as the workflow engine and run workflows as serverless workloads in Kubernetes clusters.
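Because workflow pods run on elastic container instances, you can hint the instance specification for all of a workflow's pods through pod annotations. The sketch below uses the Elastic Container Instance annotation `k8s.aliyun.com/eci-use-specs`; treat the annotation and its value format as an assumption to verify against your environment:

```yaml
# Sketch: request an instance specification for the serverless pods
# that run a workflow's steps. The annotation and its value format
# are assumptions; verify them against your environment.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: eci-specs-
spec:
  entrypoint: main
  podMetadata:
    annotations:
      k8s.aliyun.com/eci-use-specs: "2-4Gi"   # assumed to mean 2 vCPUs, 4 GiB memory
  templates:
    - name: main
      container:
        image: alpine:3.19
        command: [echo, hello]
```

`spec.podMetadata` is a standard Argo Workflows field that propagates labels and annotations to every pod the workflow creates.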
Network design
Workflow clusters are available in the following regions: China (Beijing), China (Hangzhou), China (Shanghai), China (Shenzhen), China (Zhangjiakou), China (Heyuan), China (Guangzhou), China (Hong Kong), Singapore (Singapore), Thailand (Bangkok), and Germany (Frankfurt). To use workflow clusters in other regions, join the DingTalk group 35688562 for technical support.
Create a virtual private cloud (VPC) or select an existing VPC.
Create vSwitches or select existing vSwitches.
Make sure that the CIDR blocks of the vSwitches that you use provide sufficient IP addresses for Argo workflows. A workflow may create a large number of pods, each of which requests an IP address from a vSwitch.
Create a vSwitch in each zone of the region that you select, and specify multiple vSwitch IDs in the input parameters when you create a workflow engine. The workflow engine then automatically creates elastic container instances in zones that have sufficient inventory of elastic container instances to run large numbers of workflows. If elastic container instances are out of stock in all of the specified zones, workflows cannot run because no instances can be created.
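To estimate whether your vSwitch CIDR blocks can absorb a burst of workflow pods, simple IPv4 arithmetic is enough. The sketch below uses Python's standard ipaddress module; note that cloud vSwitches typically reserve a few extra addresses per CIDR block, so treat the result as an upper bound:

```python
import ipaddress

def usable_hosts(cidr: str) -> int:
    """Upper bound on pod IPs a vSwitch CIDR block can provide.

    Standard IPv4 math (network and broadcast addresses excluded);
    cloud vSwitches usually reserve a few more addresses, so real
    capacity is slightly lower.
    """
    return ipaddress.ip_network(cidr).num_addresses - 2

# Example: three vSwitches, one per zone (CIDR blocks are illustrative).
vswitches = ["192.168.0.0/24", "192.168.1.0/24", "192.168.16.0/20"]
total = sum(usable_hosts(c) for c in vswitches)
print(total)  # 254 + 254 + 4094 = 4602
```

A workflow that fans out into thousands of pods can exhaust a single /24 quickly, which is why spreading larger CIDR blocks across multiple zones is recommended.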