Container Service for Kubernetes: Use Terraform to create an ACK managed cluster

Last Updated: Dec 20, 2024

This topic describes how to use Terraform to create an ACK managed cluster.

Note

You can run the sample code in this topic with one click in OpenAPI Portal.

Prerequisites

  • Container Service for Kubernetes (ACK) is activated. If you want to use Terraform to activate ACK, refer to Activate ACK and assign default roles to ACK.

  • By default, an Alibaba Cloud account has full permissions on all resources that belong to this account. Security risks may arise if the credentials of an Alibaba Cloud account are leaked. We recommend that you use Resource Access Management (RAM) users to manage resources. When you create a RAM user, you need to create an AccessKey pair for the RAM user. For more information, see Create a RAM user and Create an AccessKey pair.
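
    If you run Terraform on an on-premises machine, the Alibaba Cloud provider can read the RAM user's AccessKey pair from environment variables. The following is a minimal sketch; replace the placeholder values with your own credentials:

    export ALICLOUD_ACCESS_KEY="<your-access-key-id>"
    export ALICLOUD_SECRET_KEY="<your-access-key-secret>"
    export ALICLOUD_REGION="cn-shenzhen"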

  • The following policy is attached to the RAM user that you use to run commands in Terraform. The policy includes the minimum permissions required to run commands in Terraform. For more information, see Grant permissions to a RAM user.

    This policy allows RAM users to create, view, and delete virtual private clouds (VPCs), vSwitches, and ACK clusters. A Terraform sketch for creating and attaching the policy follows the policy document.

    {
      "Version": "1",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "vpc:CreateVpc",
            "vpc:CreateVSwitch",
            "cs:CreateCluster",
            "vpc:DescribeVpcAttribute",
            "vpc:DescribeVSwitchAttributes",
            "vpc:DescribeRouteTableList",
            "vpc:DescribeNatGateways",
            "cs:DescribeTaskInfo",
            "cs:DescribeClusterDetail",
            "cs:GetClusterCerts",
            "cs:CheckControlPlaneLogEnable",
            "cs:CreateClusterNodePool",
            "cs:DescribeClusterNodePoolDetail",
            "cs:ModifyClusterNodePool",
            "vpc:DeleteVpc",
            "vpc:DeleteVSwitch",
            "cs:DeleteCluster",
            "cs:DeleteClusterNodepool"
          ],
          "Resource": "*"
        }
      ]
    }
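
    If you prefer to manage this grant with Terraform as well, the following sketch creates the custom policy and attaches it to a RAM user. The policy name, file path, and user name are placeholders, the argument names follow recent versions of the alicloud provider, and the run must use credentials that have RAM administration permissions:

    # Create a custom RAM policy from the JSON document above, saved locally as ack-policy.json.
    resource "alicloud_ram_policy" "terraform_ack" {
      policy_name     = "terraform-ack-minimal"                # Hypothetical policy name.
      policy_document = file("${path.module}/ack-policy.json") # Path to the policy JSON shown above.
    }

    # Attach the custom policy to the RAM user that runs Terraform commands.
    resource "alicloud_ram_user_policy_attachment" "terraform_ack" {
      policy_name = alicloud_ram_policy.terraform_ack.policy_name
      policy_type = "Custom"        # Custom policies use the "Custom" policy type.
      user_name   = "your-ram-user" # Replace with the name of your RAM user.
    }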
  • The runtime environment for Terraform is prepared by using one of the following methods:

    • Use Terraform in Terraform Explorer: Alibaba Cloud provides an online runtime environment for Terraform. You can log on to the environment to use Terraform without the need to install Terraform. This method is suitable for scenarios where you need to use and debug Terraform in a low-cost, efficient, and convenient manner.

    • Use Terraform in Cloud Shell: Cloud Shell is preinstalled with Terraform and configured with your identity credentials. You can run Terraform commands in Cloud Shell. This method is suitable for scenarios where you need to run Terraform in a low-cost, efficient, and convenient manner with credentials already configured.

    • Install and configure Terraform on your on-premises machine: This method is suitable for scenarios where network connections are unstable or a custom development environment is needed.

    Important

    You must install Terraform 0.12.28 or later. You can run the terraform --version command to query the Terraform version.
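
    For example (the version shown in the output is illustrative):

    terraform --version

    Terraform v1.6.6
    on linux_amd64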

Resources in use

Note

Fees are generated for specific resources in this example. Release or unsubscribe from the resources when you no longer need them.

Use Terraform to create an ACK managed cluster (Terway)

In this example, an ACK managed cluster is created that contains a regular node pool, a managed node pool, and a node pool for which auto scaling is enabled. The following add-ons are installed in the cluster by default: Terway (network add-on), csi-plugin (volume add-on), csi-provisioner (volume add-on), logtail-ds (logging add-on), Nginx Ingress controller, ack-arms-prometheus (monitoring add-on), and ack-node-problem-detector (node diagnostics add-on).

  1. Create a working directory and a file named main.tf under the directory. Then, copy the following content to the main.tf file.

    provider "alicloud" {
      region = var.region_id
    }
    
    variable "region_id" {
      type    = string
      default = "cn-shenzhen"
    }
    
    variable "cluster_spec" {
      type        = string
      description = "The cluster specifications of kubernetes cluster,which can be empty. Valid values:ack.standard : Standard managed clusters; ack.pro.small : Professional managed clusters."
      default     = "ack.pro.small"
    }
    
    # Specify the zones of vSwitches. 
    variable "availability_zone" {
      description = "The availability zones of vswitches."
      default     = ["cn-shenzhen-c", "cn-shenzhen-e", "cn-shenzhen-f"]
    }
    
    # Specify vSwitch IDs. 
    variable "node_vswitch_ids" {
      description = "List of existing node vswitch ids for terway."
      type        = list(string)
      default     = []
    }
    
    # This variable specifies the CIDR blocks for creating vSwitches. 
    variable "node_vswitch_cidrs" {
      description = "List of cidr blocks used to create several new vswitches when 'node_vswitch_ids' is not specified."
      type        = list(string)
      default     = ["172.16.0.0/23", "172.16.2.0/23", "172.16.4.0/23"]
    }
    
    # Specify the Terway configurations. If you leave this variable empty, new Terway vSwitches are created based on the value of the terway_vswitch_cidrs variable. 
    variable "terway_vswitch_ids" {
      description = "List of existing pod vswitch ids for terway."
      type        = list(string)
      default     = []
    }
    
    # This variable specifies the CIDR blocks in which Terway vSwitches are created when terway_vswitch_ids is not specified. 
    variable "terway_vswitch_cidrs" {
      description = "List of cidr blocks used to create several new vswitches when 'terway_vswitch_ids' is not specified."
      type        = list(string)
      default     = ["172.16.208.0/20", "172.16.224.0/20", "172.16.240.0/20"]
    }
    
    # Specify the ECS instance types of worker nodes. 
    variable "worker_instance_types" {
      description = "The ecs instance types used to launch worker nodes."
      default     = ["ecs.g6.2xlarge", "ecs.g6.xlarge"]
    }
    
    # Specify a password for the worker nodes.
    variable "password" {
      description = "The password of ECS instance."
      default     = "Test123456"
    }
    
    # Specify the add-ons that you want to install in the ACK managed cluster. The add-ons include Terway (network add-on), csi-plugin (volume add-on), csi-provisioner (volume add-on), logtail-ds (logging add-on), Nginx Ingress controller, ack-arms-prometheus (monitoring add-on), and ack-node-problem-detector (node diagnostics add-on). 
    variable "cluster_addons" {
      type = list(object({
        name   = string
        config = string
      }))
      default = [
        {
          "name"   = "terway-eniip",
          "config" = "",
        },
        {
          "name"   = "logtail-ds",
          "config" = "{\"IngressDashboardEnabled\":\"true\"}",
        },
        {
          "name"   = "nginx-ingress-controller",
          "config" = "{\"IngressSlbNetworkType\":\"internet\"}",
        },
        {
          "name"   = "arms-prometheus",
          "config" = "",
        },
        {
          "name"   = "ack-node-problem-detector",
          "config" = "{\"sls_project_name\":\"\"}",
        },
        {
          "name"   = "csi-plugin",
          "config" = "",
        },
        {
          "name"   = "csi-provisioner",
          "config" = "",
        }
      ]
    }
    
    # Specify the prefix of the name of the ACK managed cluster. 
    variable "k8s_name_prefix" {
      description = "The name prefix used to create managed kubernetes cluster."
      default     = "tf-ack-shenzhen"
    }
    
    # The default resource names. 
    locals {
      k8s_name_terway         = substr(join("-", [var.k8s_name_prefix, "terway"]), 0, 63)
      k8s_name_flannel        = substr(join("-", [var.k8s_name_prefix, "flannel"]), 0, 63)
      k8s_name_ask            = substr(join("-", [var.k8s_name_prefix, "ask"]), 0, 63)
      new_vpc_name            = "tf-vpc-172-16"
      new_vsw_name_azD        = "tf-vswitch-azD-172-16-0"
      new_vsw_name_azE        = "tf-vswitch-azE-172-16-2"
      new_vsw_name_azF        = "tf-vswitch-azF-172-16-4"
      nodepool_name           = "default-nodepool"
      managed_nodepool_name   = "managed-node-pool"
      autoscale_nodepool_name = "autoscale-node-pool"
      log_project_name        = "log-for-${local.k8s_name_terway}"
    }
    
    # The ECS instance specifications of worker nodes. Terraform searches for ECS instance types that fulfill the CPU and memory requests. 
    data "alicloud_instance_types" "default" {
      cpu_core_count       = 8
      memory_size          = 32
      availability_zone    = var.availability_zone[0]
      kubernetes_node_role = "Worker"
    }
    
    # The VPC. 
    resource "alicloud_vpc" "default" {
      vpc_name   = local.new_vpc_name
      cidr_block = "172.16.0.0/12"
    }
    
    # The node vSwitches. 
    resource "alicloud_vswitch" "vswitches" {
      count      = length(var.node_vswitch_ids) > 0 ? 0 : length(var.node_vswitch_cidrs)
      vpc_id     = alicloud_vpc.default.id
      cidr_block = element(var.node_vswitch_cidrs, count.index)
      zone_id    = element(var.availability_zone, count.index)
    }
    
    # The pod vSwitches. 
    resource "alicloud_vswitch" "terway_vswitches" {
      count      = length(var.terway_vswitch_ids) > 0 ? 0 : length(var.terway_vswitch_cidrs)
      vpc_id     = alicloud_vpc.default.id
      cidr_block = element(var.terway_vswitch_cidrs, count.index)
      zone_id    = element(var.availability_zone, count.index)
    }
    
    # The ACK managed cluster. 
    resource "alicloud_cs_managed_kubernetes" "default" {
      name                         = local.k8s_name_terway                                         # The ACK cluster name. 
      cluster_spec                 = var.cluster_spec                                              # Create an ACK Pro cluster. 
      worker_vswitch_ids           = split(",", join(",", alicloud_vswitch.vswitches.*.id))        # The vSwitches for worker nodes. Specify one or more vSwitch IDs. The vSwitches must reside in the zones specified by availability_zone. 
      pod_vswitch_ids              = split(",", join(",", alicloud_vswitch.terway_vswitches.*.id)) # The vSwitches for pods (Terway). 
      new_nat_gateway              = true                                                          # Specify whether to create a NAT gateway when the Kubernetes cluster is created. Default value: true. 
      service_cidr                 = "10.11.0.0/16"                                                # The Service CIDR block. It cannot overlap with the VPC CIDR block or the CIDR blocks of other Kubernetes clusters in the VPC. You cannot change the Service CIDR block after the cluster is created. 
      slb_internet_enabled         = true                                                          # Specify whether to create an Internet-facing SLB instance for the API server of the cluster. Default value: false. 
      enable_rrsa                  = true
      control_plane_log_components = ["apiserver", "kcm", "scheduler", "ccm"] # The control plane logs. 
      dynamic "addons" {                                                      # Component management. 
        for_each = var.cluster_addons
        content {
          name   = lookup(addons.value, "name", "")
          config = lookup(addons.value, "config", "")
        }
      }
    }
    
    # The regular node pool. 
    resource "alicloud_cs_kubernetes_node_pool" "default" {
      cluster_id            = alicloud_cs_managed_kubernetes.default.id              # The ID of the ACK cluster. 
      node_pool_name        = local.nodepool_name                                    # The node pool name. 
      vswitch_ids           = split(",", join(",", alicloud_vswitch.vswitches.*.id)) # The vSwitch to which the node pool belongs. Specify one or more vSwitch IDs. The vSwitches must reside in the zone specified by availability_zone. 
      instance_types        = var.worker_instance_types
      instance_charge_type  = "PostPaid"
      desired_size          = 2            # The expected number of nodes in the node pool. 
      password              = var.password # The password that is used to log on to the cluster by using SSH. 
      install_cloud_monitor = true         # Specify whether to install the CloudMonitor agent on the nodes in the cluster. 
      system_disk_category  = "cloud_efficiency"
      system_disk_size      = 100
      image_type            = "AliyunLinux"
      data_disks {              # The data disk configuration of the node. 
        category = "cloud_essd" # The disk type. 
        size     = 120          # The disk size. 
      }
    }
    
    # Create a managed node pool. 
    resource "alicloud_cs_kubernetes_node_pool" "managed_node_pool" {
      cluster_id     = alicloud_cs_managed_kubernetes.default.id              # The ID of the ACK cluster. 
      node_pool_name = local.managed_nodepool_name                            # The node pool name. 
      vswitch_ids    = split(",", join(",", alicloud_vswitch.vswitches.*.id)) # The vSwitch to which the node pool belongs. Specify one or more vSwitch IDs. The vSwitches must reside in the zone specified by availability_zone. 
      desired_size   = 0                                                      # The expected number of nodes in the node pool. 
      management {
        auto_repair     = true
        auto_upgrade    = true
        max_unavailable = 1
      }
      instance_types        = var.worker_instance_types
      instance_charge_type  = "PostPaid"
      password              = var.password
      install_cloud_monitor = true
      system_disk_category  = "cloud_efficiency"
      system_disk_size      = 100
      image_type            = "AliyunLinux"
      data_disks {
        category = "cloud_essd"
        size     = 120
      }
    }
    
    # Create a node pool for which auto scaling is enabled. The node pool can be scaled out to a maximum of 10 nodes and must contain at least 1 node. 
    resource "alicloud_cs_kubernetes_node_pool" "autoscale_node_pool" {
      cluster_id     = alicloud_cs_managed_kubernetes.default.id
      node_pool_name = local.autoscale_nodepool_name
      vswitch_ids    = split(",", join(",", alicloud_vswitch.vswitches.*.id))
      scaling_config {
        min_size = 1
        max_size = 10
      }
      instance_types        = var.worker_instance_types
      password              = var.password # The password that is used to log on to the cluster by using SSH. 
      install_cloud_monitor = true         # Specify whether to install the CloudMonitor agent on the nodes in the cluster. 
      system_disk_category  = "cloud_efficiency"
      system_disk_size      = 100
      image_type            = "AliyunLinux3"
      data_disks {              # The data disk configuration of the node. 
        category = "cloud_essd" # The disk type. 
        size     = 120          # The disk size. 
      }
    }
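
    Optionally, you can override the variable defaults without editing the main.tf file by creating a file named terraform.tfvars in the same directory. The values below are illustrative; choose a region and zones that are available to your account:

    region_id         = "cn-hangzhou"
    availability_zone = ["cn-hangzhou-h", "cn-hangzhou-i", "cn-hangzhou-j"]
    password          = "YourStrongPassword123"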
  2. Run the following command to initialize the runtime environment for Terraform:

    terraform init

    If the following information is returned, Terraform is initialized:

    Terraform has been successfully initialized!
    
    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
    
    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.
  3. Create an execution plan and preview the changes.

    terraform plan
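
    To apply exactly the plan you previewed, you can also save the plan to a file and pass that file to terraform apply in the next step (the file name tfplan is arbitrary):

    terraform plan -out=tfplan
    terraform apply tfplan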
  4. Run the following command to create a cluster:

    terraform apply

    When prompted, enter yes and press Enter. Then, wait for the command to complete. If the following information is returned, the ACK cluster is created.

    Do you want to perform these actions?
      Terraform will perform the actions described above.
      Only 'yes' will be accepted to approve.
    
      Enter a value: yes
    
    ...
    alicloud_cs_managed_kubernetes.default: Creation complete after 5m48s [id=ccb53e72ec6c447c990762800********]
    ...
    
    Apply complete! Resources: 11 added, 0 changed, 0 destroyed.
  5. Verify the result.

    Run the terraform show command

    Run the following command to query the resources that are created by Terraform:

    terraform show

    Log on to the ACK console

    Log on to the ACK console to view the created clusters.
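
    Optionally, you can declare output values in the main.tf file so that Terraform prints key attributes after each apply. The following is a minimal sketch; the output names are arbitrary:

    # Print the cluster ID and the node vSwitch IDs after terraform apply.
    output "cluster_id" {
      description = "The ID of the ACK managed cluster."
      value       = alicloud_cs_managed_kubernetes.default.id
    }

    output "vswitch_ids" {
      description = "The IDs of the node vSwitches."
      value       = alicloud_vswitch.vswitches.*.id
    }

    You can then run the terraform output command to display these values at any time.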

Clear resources

If you no longer require the resources created or managed by Terraform, run the terraform destroy command to release them. For more information about terraform destroy, see Common commands.

terraform destroy

Complete sample code

Note

You can run the sample code with one click in OpenAPI Portal.

provider "alicloud" {
  region = var.region_id
}

variable "region_id" {
  type    = string
  default = "cn-shenzhen"
}

variable "cluster_spec" {
  type        = string
  description = "The cluster specifications of kubernetes cluster,which can be empty. Valid values:ack.standard : Standard managed clusters; ack.pro.small : Professional managed clusters."
  default     = "ack.pro.small"
}

# Specify the zones of vSwitches. 
variable "availability_zone" {
  description = "The availability zones of vswitches."
  default     = ["cn-shenzhen-c", "cn-shenzhen-e", "cn-shenzhen-f"]
}

# Specify vSwitch IDs. 
variable "node_vswitch_ids" {
  description = "List of existing node vswitch ids for terway."
  type        = list(string)
  default     = []
}

# The CIDR blocks used to create vSwitches. 
variable "node_vswitch_cidrs" {
  description = "List of cidr blocks used to create several new vswitches when 'node_vswitch_ids' is not specified."
  type        = list(string)
  default     = ["172.16.0.0/23", "172.16.2.0/23", "172.16.4.0/23"]
}

# Specify the Terway configurations. If you leave this variable empty, new Terway vSwitches are created based on the value of the terway_vswitch_cidrs variable. 
variable "terway_vswitch_ids" {
  description = "List of existing pod vswitch ids for terway."
  type        = list(string)
  default     = []
}

# This variable specifies the CIDR blocks in which Terway vSwitches are created when terway_vswitch_ids is not specified. 
variable "terway_vswitch_cidrs" {
  description = "List of cidr blocks used to create several new vswitches when 'terway_vswitch_ids' is not specified."
  type        = list(string)
  default     = ["172.16.208.0/20", "172.16.224.0/20", "172.16.240.0/20"]
}

# Specify the ECS instance types of worker nodes. 
variable "worker_instance_types" {
  description = "The ecs instance types used to launch worker nodes."
  default     = ["ecs.g6.2xlarge", "ecs.g6.xlarge"]
}

# Specify a password for the worker nodes.
variable "password" {
  description = "The password of ECS instance."
  default     = "Test123456"
}

# Specify the add-ons that you want to install in the ACK managed cluster. The add-ons include Terway (network add-on), csi-plugin (volume add-on), csi-provisioner (volume add-on), logtail-ds (logging add-on), Nginx Ingress controller, ack-arms-prometheus (monitoring add-on), and ack-node-problem-detector (node diagnostics add-on). 
variable "cluster_addons" {
  type = list(object({
    name   = string
    config = string
  }))
  default = [
    {
      "name"   = "terway-eniip",
      "config" = "",
    },
    {
      "name"   = "logtail-ds",
      "config" = "{\"IngressDashboardEnabled\":\"true\"}",
    },
    {
      "name"   = "nginx-ingress-controller",
      "config" = "{\"IngressSlbNetworkType\":\"internet\"}",
    },
    {
      "name"   = "arms-prometheus",
      "config" = "",
    },
    {
      "name"   = "ack-node-problem-detector",
      "config" = "{\"sls_project_name\":\"\"}",
    },
    {
      "name"   = "csi-plugin",
      "config" = "",
    },
    {
      "name"   = "csi-provisioner",
      "config" = "",
    }
  ]
}

# Specify the prefix of the name of the ACK managed cluster. 
variable "k8s_name_prefix" {
  description = "The name prefix used to create managed kubernetes cluster."
  default     = "tf-ack-shenzhen"
}

# The default resource names. 
locals {
  k8s_name_terway         = substr(join("-", [var.k8s_name_prefix, "terway"]), 0, 63)
  k8s_name_flannel        = substr(join("-", [var.k8s_name_prefix, "flannel"]), 0, 63)
  k8s_name_ask            = substr(join("-", [var.k8s_name_prefix, "ask"]), 0, 63)
  new_vpc_name            = "tf-vpc-172-16"
  new_vsw_name_azD        = "tf-vswitch-azD-172-16-0"
  new_vsw_name_azE        = "tf-vswitch-azE-172-16-2"
  new_vsw_name_azF        = "tf-vswitch-azF-172-16-4"
  nodepool_name           = "default-nodepool"
  managed_nodepool_name   = "managed-node-pool"
  autoscale_nodepool_name = "autoscale-node-pool"
  log_project_name        = "log-for-${local.k8s_name_terway}"
}

# The ECS instance specifications of worker nodes. Terraform searches for ECS instance types that fulfill the CPU and memory requests. 
data "alicloud_instance_types" "default" {
  cpu_core_count       = 8
  memory_size          = 32
  availability_zone    = var.availability_zone[0]
  kubernetes_node_role = "Worker"
}

# The VPC. 
resource "alicloud_vpc" "default" {
  vpc_name   = local.new_vpc_name
  cidr_block = "172.16.0.0/12"
}

# The node vSwitches. 
resource "alicloud_vswitch" "vswitches" {
  count      = length(var.node_vswitch_ids) > 0 ? 0 : length(var.node_vswitch_cidrs)
  vpc_id     = alicloud_vpc.default.id
  cidr_block = element(var.node_vswitch_cidrs, count.index)
  zone_id    = element(var.availability_zone, count.index)
}

# The pod vSwitches. 
resource "alicloud_vswitch" "terway_vswitches" {
  count      = length(var.terway_vswitch_ids) > 0 ? 0 : length(var.terway_vswitch_cidrs)
  vpc_id     = alicloud_vpc.default.id
  cidr_block = element(var.terway_vswitch_cidrs, count.index)
  zone_id    = element(var.availability_zone, count.index)
}

# The ACK managed cluster. 
resource "alicloud_cs_managed_kubernetes" "default" {
  name                         = local.k8s_name_terway # The ACK cluster name. 
  cluster_spec                 = var.cluster_spec      # Create an ACK Pro cluster. 
  worker_vswitch_ids           = split(",", join(",", alicloud_vswitch.vswitches.*.id))        # The vSwitches for worker nodes. Specify one or more vSwitch IDs. The vSwitches must reside in the zones specified by availability_zone. 
  pod_vswitch_ids              = split(",", join(",", alicloud_vswitch.terway_vswitches.*.id)) # The vSwitches for pods (Terway). 
  new_nat_gateway              = true                                                          # Specify whether to create a NAT gateway when the Kubernetes cluster is created. Default value: true. 
  service_cidr                 = "10.11.0.0/16"                                                # The Service CIDR block. It cannot overlap with the VPC CIDR block or the CIDR blocks of other Kubernetes clusters in the VPC. You cannot change the Service CIDR block after the cluster is created. 
  slb_internet_enabled         = true                                                          # Specify whether to create an Internet-facing SLB instance for the API server of the cluster. Default value: false. 
  enable_rrsa                  = true
  control_plane_log_components = ["apiserver", "kcm", "scheduler", "ccm"] # The control plane logs. 
  dynamic "addons" { # Component management. 
    for_each = var.cluster_addons
    content {
      name   = lookup(addons.value, "name", "")
      config = lookup(addons.value, "config", "")
    }
  }
}

# The regular node pool. 
resource "alicloud_cs_kubernetes_node_pool" "default" {
  cluster_id            = alicloud_cs_managed_kubernetes.default.id              # The ID of the ACK cluster. 
  node_pool_name        = local.nodepool_name                                    # The node pool name. 
  vswitch_ids           = split(",", join(",", alicloud_vswitch.vswitches.*.id)) # The vSwitch to which the node pool belongs. Specify one or more vSwitch IDs. The vSwitches must reside in the zone specified by availability_zone. 
  instance_types        = var.worker_instance_types
  instance_charge_type  = "PostPaid"
  desired_size          = 2            # The expected number of nodes in the node pool. 
  password              = var.password # The password that is used to log on to the cluster by using SSH. 
  install_cloud_monitor = true         # Specify whether to install the CloudMonitor agent on the nodes in the cluster. 
  system_disk_category  = "cloud_efficiency"
  system_disk_size      = 100
  image_type            = "AliyunLinux"
  data_disks {              # The data disk configuration of the node. 
    category = "cloud_essd" # The disk type. 
    size     = 120          # The disk size. 
  }
}

# Create a managed node pool. 
resource "alicloud_cs_kubernetes_node_pool" "managed_node_pool" {
  cluster_id     = alicloud_cs_managed_kubernetes.default.id              # The ID of the ACK cluster. 
  node_pool_name = local.managed_nodepool_name                            # The node pool name. 
  vswitch_ids    = split(",", join(",", alicloud_vswitch.vswitches.*.id)) # The vSwitch to which the node pool belongs. Specify one or more vSwitch IDs. The vSwitches must reside in the zone specified by availability_zone. 
  desired_size   = 0                                                      # The expected number of nodes in the node pool. 
  management {
    auto_repair     = true
    auto_upgrade    = true
    max_unavailable = 1
  }
  instance_types        = var.worker_instance_types
  instance_charge_type  = "PostPaid"
  password              = var.password
  install_cloud_monitor = true
  system_disk_category  = "cloud_efficiency"
  system_disk_size      = 100
  image_type            = "AliyunLinux"
  data_disks {
    category = "cloud_essd"
    size     = 120
  }
}

# Create a node pool for which auto scaling is enabled. The node pool can be scaled out to a maximum of 10 nodes and must contain at least 1 node. 
resource "alicloud_cs_kubernetes_node_pool" "autoscale_node_pool" {
  cluster_id     = alicloud_cs_managed_kubernetes.default.id
  node_pool_name = local.autoscale_nodepool_name
  vswitch_ids    = split(",", join(",", alicloud_vswitch.vswitches.*.id))
  scaling_config {
    min_size = 1
    max_size = 10
  }
  instance_types        = var.worker_instance_types
  password              = var.password # The password that is used to log on to the cluster by using SSH. 
  install_cloud_monitor = true         # Specify whether to install the CloudMonitor agent on the nodes in the cluster. 
  system_disk_category  = "cloud_efficiency"
  system_disk_size      = 100
  image_type            = "AliyunLinux3"
  data_disks {              # The data disk configuration of the node. 
    category = "cloud_essd" # The disk type. 
    size     = 120          # The disk size. 
  }
}