Container Service for Kubernetes: Create an ACK managed cluster by using Terraform

Last updated: Dec 06, 2024

This topic describes how to use Terraform to create an ACK managed cluster.

Note

The sample code in this topic can be run with one click in OpenAPI Portal.

Prerequisites

  • Container Service for Kubernetes (ACK) is activated. If you want to use Terraform to activate ACK, see Activate ACK and assign default roles to ACK.

  • By default, an Alibaba Cloud account has full permissions on all resources that belong to the account. Security risks may arise if the credentials of the Alibaba Cloud account are leaked. We recommend that you use a Resource Access Management (RAM) user to manage resources. When you create a RAM user, you must create an AccessKey pair for the RAM user. For more information, see Create a RAM user and Create an AccessKey pair. For one way to pass the AccessKey pair to Terraform, see the credential sketch after this list.

  • The RAM user that you use to run commands in Terraform has the following policy attached. The policy contains the minimum permissions required to run commands in Terraform. For more information, see Grant permissions to a RAM user.

    This policy allows the RAM user to create, view, and delete virtual private clouds (VPCs), vSwitches, and ACK clusters.

    {
      "Version": "1",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "vpc:CreateVpc",
            "vpc:CreateVSwitch",
            "cs:CreateCluster",
            "vpc:DescribeVpcAttribute",
            "vpc:DescribeVSwitchAttributes",
            "vpc:DescribeRouteTableList",
            "vpc:DescribeNatGateways",
            "cs:DescribeTaskInfo",
            "cs:DescribeClusterDetail",
            "cs:GetClusterCerts",
            "cs:CheckControlPlaneLogEnable",
            "cs:CreateClusterNodePool",
            "cs:DescribeClusterNodePoolDetail",
            "cs:ModifyClusterNodePool",
            "vpc:DeleteVpc",
            "vpc:DeleteVSwitch",
            "cs:DeleteCluster",
            "cs:DeleteClusterNodepool"
          ],
          "Resource": "*"
        }
      ]
    }
  • The runtime environment for Terraform is prepared by using one of the following methods:

    • Terraform Explorer: Alibaba Cloud provides an online runtime environment for Terraform. You can log on to the environment and use Terraform without installing it. This method is suitable for scenarios where you need to use and debug Terraform in a low-cost, efficient, and convenient manner.

    • Use Terraform in Cloud Shell: Cloud Shell comes preinstalled with Terraform and is configured with your identity credentials. You can run Terraform commands in Cloud Shell. This method is suitable for scenarios where you need to use Terraform in a low-cost, efficient, and convenient manner.

    • Install and configure Terraform on an on-premises machine: This method is suitable for scenarios where network connections are unstable or a custom development environment is required.

    Important

    You must install Terraform 0.12.28 or later. You can run the terraform --version command to query the Terraform version.
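
A minimal sketch of supplying the RAM user's AccessKey pair to Terraform, assuming you use the environment variables that the alicloud provider reads (the values below are placeholders, not real credentials):

export ALICLOUD_ACCESS_KEY="<yourAccessKeyID>"
export ALICLOUD_SECRET_KEY="<yourAccessKeySecret>"
export ALICLOUD_REGION="cn-shenzhen"   # Should match the region_id variable in main.tf.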

Resources in use

Note

In this example, fees are generated for specific resources. Release or unsubscribe from the resources when they are no longer needed.

Use Terraform to create an ACK managed cluster (Terway)

In this example, an ACK managed cluster is created that contains a regular node pool, a managed node pool, and a node pool with auto scaling enabled. By default, the following add-ons are installed in the cluster: Terway (network add-on), csi-plugin (volume add-on), csi-provisioner (volume add-on), logtail-ds (logging add-on), the Nginx Ingress controller, ack-arms-prometheus (monitoring add-on), and ack-node-problem-detector (node diagnostics add-on).

  1. Create a working directory and a file named main.tf in the directory. Then, copy the following content to the main.tf file:

    provider "alicloud" {
      region = var.region_id
    }
    
    variable "region_id" {
      type    = string
      default = "cn-shenzhen"
    }
    
    variable "cluster_spec" {
      type        = string
      description = "The cluster specifications of kubernetes cluster,which can be empty. Valid values:ack.standard : Standard managed clusters; ack.pro.small : Professional managed clusters."
      default     = "ack.pro.small"
    }
    
    variable "ack_version" {
      type        = string
      description = "Desired Kubernetes version. "
      default     = "1.28.9-aliyun.1"
    }
    
    # Specify the zones of vSwitches. 
    variable "availability_zone" {
      description = "The availability zones of vswitches."
      default     = ["cn-shenzhen-c", "cn-shenzhen-e", "cn-shenzhen-f"]
    }
    
    # Specify vSwitch IDs. 
    variable "node_vswitch_ids" {
      description = "List of existing node vswitch ids for terway."
      type        = list(string)
      default     = []
    }
    
    # This variable specifies the CIDR blocks in which vSwitches are created if you do not specify the node_vswitch_ids variable. 
    variable "node_vswitch_cidrs" {
      description = "List of cidr blocks used to create several new vswitches when 'node_vswitch_ids' is not specified."
      type        = list(string)
      default     = ["172.16.0.0/23", "172.16.2.0/23", "172.16.4.0/23"]
    }
    
    # Specify the Terway configurations. If you leave this variable empty, new Terway vSwitches are created based on the value of the terway_vswitch_cidrs variable. 
    variable "terway_vswitch_ids" {
      description = "List of existing pod vswitch ids for terway."
      type        = list(string)
      default     = []
    }
    
    # This variable specifies the CIDR blocks in which Terway vSwitches are created when terway_vswitch_ids is not specified. 
    variable "terway_vswitch_cidrs" {
      description = "List of cidr blocks used to create several new vswitches when 'terway_vswitch_ids' is not specified."
      type        = list(string)
      default     = ["172.16.208.0/20", "172.16.224.0/20", "172.16.240.0/20"]
    }
    
    # Specify the ECS instance types of worker nodes. 
    variable "worker_instance_types" {
      description = "The ecs instance types used to launch worker nodes."
      default     = ["ecs.g6.2xlarge", "ecs.g6.xlarge"]
    }
    
    # Specify a password for the worker node.
    variable "password" {
      description = "The password of ECS instance."
      default     = "Test123456"
    }
    
    # Specify the add-ons that you want to install in the ACK managed cluster. The add-ons include Terway (network add-on), csi-plugin (volume add-on), csi-provisioner (volume add-on), logtail-ds (logging add-on), Nginx Ingress controller, ack-arms-prometheus (monitoring add-on), and ack-node-problem-detector (node diagnostics add-on). 
    variable "cluster_addons" {
      type = list(object({
        name   = string
        config = string
      }))
    
      default = [
        {
          "name"   = "terway-eniip",
          "config" = "",
        },
        {
          "name"   = "logtail-ds",
          "config" = "{\"IngressDashboardEnabled\":\"true\"}",
        },
        {
          "name"   = "nginx-ingress-controller",
          "config" = "{\"IngressSlbNetworkType\":\"internet\"}",
        },
        {
          "name"   = "arms-prometheus",
          "config" = "",
        },
        {
          "name"   = "ack-node-problem-detector",
          "config" = "{\"sls_project_name\":\"\"}",
        },
        {
          "name"   = "csi-plugin",
          "config" = "",
        },
        {
          "name"   = "csi-provisioner",
          "config" = "",
        }
      ]
    }
    
    # Specify the prefix of the name of the ACK managed cluster. 
    variable "k8s_name_prefix" {
      description = "The name prefix used to create managed kubernetes cluster."
      default     = "tf-ack-shenzhen"
    }
    
    # The default resource names. 
    locals {
      k8s_name_terway         = substr(join("-", [var.k8s_name_prefix, "terway"]), 0, 63)
      k8s_name_flannel        = substr(join("-", [var.k8s_name_prefix, "flannel"]), 0, 63)
      k8s_name_ask            = substr(join("-", [var.k8s_name_prefix, "ask"]), 0, 63)
      new_vpc_name            = "tf-vpc-172-16"
      new_vsw_name_azD        = "tf-vswitch-azD-172-16-0"
      new_vsw_name_azE        = "tf-vswitch-azE-172-16-2"
      new_vsw_name_azF        = "tf-vswitch-azF-172-16-4"
      nodepool_name           = "default-nodepool"
      managed_nodepool_name   = "managed-node-pool"
      autoscale_nodepool_name = "autoscale-node-pool"
      log_project_name        = "log-for-${local.k8s_name_terway}"
    }
    
    # The ECS instance specifications of worker nodes. Terraform searches for ECS instance types that fulfill the CPU and memory requests. 
    data "alicloud_instance_types" "default" {
      cpu_core_count       = 8
      memory_size          = 32
      availability_zone    = var.availability_zone[0]
      kubernetes_node_role = "Worker"
    }
    
    # The VPC. 
    resource "alicloud_vpc" "default" {
      vpc_name   = local.new_vpc_name
      cidr_block = "172.16.0.0/12"
    }
    
    # The node vSwitches. 
    resource "alicloud_vswitch" "vswitches" {
      count      = length(var.node_vswitch_ids) > 0 ?  0 : length(var.node_vswitch_cidrs)
      vpc_id     = alicloud_vpc.default.id
      cidr_block = element(var.node_vswitch_cidrs, count.index)
      zone_id    = element(var.availability_zone, count.index)
    }
    
    # The pod vSwitches. 
    resource "alicloud_vswitch" "terway_vswitches" {
      count      = length(var.terway_vswitch_ids) > 0 ?  0 : length(var.terway_vswitch_cidrs)
      vpc_id     = alicloud_vpc.default.id
      cidr_block = element(var.terway_vswitch_cidrs, count.index)
      zone_id    = element(var.availability_zone, count.index)
    }
    
    # The ACK managed cluster. 
    resource "alicloud_cs_managed_kubernetes" "default" {
      name                         = local.k8s_name_terway # The ACK cluster name. 
      cluster_spec                 = var.cluster_spec      # Create an ACK Pro cluster. 
      version                      = var.ack_version
      worker_vswitch_ids           = split(",", join(",", alicloud_vswitch.vswitches.*.id))        # The vSwitch to which the node pool belongs. Specify one or more vSwitch IDs. The vSwitches must reside in the zone specified by availability_zone. 
      pod_vswitch_ids              = split(",", join(",", alicloud_vswitch.terway_vswitches.*.id)) # The vSwitch of the pod. 
      new_nat_gateway              = true                                                          # Specify whether to create a NAT gateway when the Kubernetes cluster is created. Default value: true. 
      service_cidr                 = "10.11.0.0/16"                                                # The Service CIDR block. It cannot overlap with the VPC CIDR block or with the CIDR blocks of other Kubernetes clusters in the VPC. You cannot change the Service CIDR block after the cluster is created. 
      slb_internet_enabled         = true                                                          # Specify whether to create an Internet-facing SLB instance for the API server of the cluster. Default value: false. 
      enable_rrsa                  = true
      control_plane_log_components = ["apiserver", "kcm", "scheduler", "ccm"] # The control plane logs. 
    
      dynamic "addons" { # Component management. 
        for_each = var.cluster_addons
        content {
          name   = lookup(addons.value, "name", var.cluster_addons)
          config = lookup(addons.value, "config", var.cluster_addons)
        }
      }
    }
    
    # The regular node pool. 
    resource "alicloud_cs_kubernetes_node_pool" "default" {
      cluster_id            = alicloud_cs_managed_kubernetes.default.id              # The ID of the ACK cluster. 
      node_pool_name        = local.nodepool_name                                    # The node pool name. 
      vswitch_ids           = split(",", join(",", alicloud_vswitch.vswitches.*.id)) # The vSwitch to which the node pool belongs. Specify one or more vSwitch IDs. The vSwitches must reside in the zone specified by availability_zone. 
      instance_types        = var.worker_instance_types
      instance_charge_type  = "PostPaid"
      runtime_name          = "containerd"
      runtime_version       = "1.6.20"
      desired_size          = 2            # The expected number of nodes in the node pool. 
      password              = var.password # The password that is used to log on to the cluster by using SSH. 
      install_cloud_monitor = true         # Specify whether to install the CloudMonitor agent on the nodes in the cluster. 
      system_disk_category  = "cloud_efficiency"
      system_disk_size      = 100
      image_type            = "AliyunLinux"
    
      data_disks {              # The data disk configuration of the node. 
        category = "cloud_essd" # The disk type. 
        size     = 120          # The disk size. 
      }
    }
    
    # Create a managed node pool. 
    resource "alicloud_cs_kubernetes_node_pool" "managed_node_pool" {
      cluster_id     = alicloud_cs_managed_kubernetes.default.id              # The ID of the ACK cluster. 
      node_pool_name = local.managed_nodepool_name                            # The node pool name. 
      vswitch_ids    = split(",", join(",", alicloud_vswitch.vswitches.*.id)) # The vSwitch to which the node pool belongs. Specify one or more vSwitch IDs. The vSwitches must reside in the zone specified by availability_zone. 
      desired_size   = 0                                                      # The expected number of nodes in the node pool. 
    
      management {
        auto_repair     = true
        auto_upgrade    = true
        max_unavailable = 1
      }
    
      instance_types        = var.worker_instance_types
      instance_charge_type  = "PostPaid"
      runtime_name          = "containerd"
      runtime_version       = "1.6.20"
      password              = var.password
      install_cloud_monitor = true
      system_disk_category  = "cloud_efficiency"
      system_disk_size      = 100
      image_type            = "AliyunLinux"
    
      data_disks {
        category = "cloud_essd"
        size     = 120
      }
    }
    
    # Create a node pool for which auto scaling is enabled. The node pool can be scaled out to a maximum of 10 nodes and must contain at least 1 node. 
    resource "alicloud_cs_kubernetes_node_pool" "autoscale_node_pool" {
      cluster_id     = alicloud_cs_managed_kubernetes.default.id
      node_pool_name = local.autoscale_nodepool_name
      vswitch_ids    = split(",", join(",", alicloud_vswitch.vswitches.*.id))
    
      scaling_config {
        min_size = 1
        max_size = 10
      }
    
      instance_types        = var.worker_instance_types
      runtime_name          = "containerd"
      runtime_version       = "1.6.20"
      password              = var.password # The password that is used to log on to the cluster by using SSH. 
      install_cloud_monitor = true         # Specify whether to install the CloudMonitor agent on the nodes in the cluster. 
      system_disk_category  = "cloud_efficiency"
      system_disk_size      = 100
      image_type            = "AliyunLinux3"
    
      data_disks {              # The data disk configuration of the node. 
        category = "cloud_essd" # The disk type. 
        size     = 120          # The disk size. 
      }
    }
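
    Optionally, as a sketch that is not part of the sample above, you can pin the provider source and version at the top of main.tf so that terraform init resolves a known provider build. The registry source of the provider is aliyun/alicloud; the version constraint below is only an example, so choose one that matches your environment.

    terraform {
      required_providers {
        alicloud = {
          source  = "aliyun/alicloud"
          # Example constraint only; pick a version that matches your environment.
          version = ">= 1.212.0"
        }
      }
    }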
  2. Run the following command to initialize the Terraform runtime environment:

    terraform init

    If the following information is returned, Terraform is initialized:

    Terraform has been successfully initialized!
    
    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
    
    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.
  3. Create an execution plan and preview the changes:

    terraform plan
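
    Optionally, you can save the plan to a file and later apply exactly that plan; -out and passing the saved plan file to terraform apply are standard Terraform options:

    terraform plan -out=tfplan
    terraform apply tfplan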
  4. Run the following command to create the cluster:

    terraform apply

    During the execution of the command, enter yes and press Enter. Wait until the command finishes. If the following information is returned, the ACK cluster is created:

    Do you want to perform these actions?
      Terraform will perform the actions described above.
      Only 'yes' will be accepted to approve.
    
      Enter a value: yes
    
    ...
    alicloud_cs_managed_kubernetes.default: Creation complete after 5m48s [id=ccb53e72ec6c447c990762800********]
    ...
    
    Apply complete!  Resources: 11 added, 0 changed, 0 destroyed.
  5. Verify the results.

    Run the terraform show command

    Run the following command to query the resources created by Terraform:

    terraform show

    Log on to the ACK console

    Log on to the ACK console to view the cluster that you created.
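
    As a small optional sketch that is not part of the sample in this topic, you can also declare a Terraform output for the cluster ID so that terraform apply and terraform output print it directly. The resource address matches the configuration above:

    output "cluster_id" {
      description = "The ID of the ACK managed cluster."
      value       = alicloud_cs_managed_kubernetes.default.id
    }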

Clean up resources

If you no longer require the preceding resources created or managed by Terraform, run the terraform destroy command to release them. For more information about terraform destroy, see Common commands.

terraform destroy
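
Like terraform apply, terraform destroy prompts for confirmation and accepts only yes. If you script the cleanup, the standard -auto-approve flag skips the prompt; use it with care because it deletes the resources without asking:

terraform destroy -auto-approve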

Complete sample code

Note

You can run the sample code with one click in OpenAPI Portal.

provider "alicloud" {
  region = var.region_id
}

variable "region_id" {
  type    = string
  default = "cn-shenzhen"
}

variable "cluster_spec" {
  type        = string
  description = "The cluster specifications of kubernetes cluster,which can be empty. Valid values:ack.standard : Standard managed clusters; ack.pro.small : Professional managed clusters."
  default     = "ack.pro.small"
}

variable "ack_version" {
  type        = string
  description = "Desired Kubernetes version. "
  default     = "1.28.9-aliyun.1"
}

# Specify the zones of vSwitches. 
variable "availability_zone" {
  description = "The availability zones of vswitches."
  default     = ["cn-shenzhen-c", "cn-shenzhen-e", "cn-shenzhen-f"]
}

# Specify vSwitch IDs. 
variable "node_vswitch_ids" {
  description = "List of existing node vswitch ids for terway."
  type        = list(string)
  default     = []
}

# The CIDR blocks used to create vSwitches. 
variable "node_vswitch_cidrs" {
  description = "List of cidr blocks used to create several new vswitches when 'node_vswitch_ids' is not specified."
  type        = list(string)
  default     = ["172.16.0.0/23", "172.16.2.0/23", "172.16.4.0/23"]
}

# Specify the Terway configurations. If you leave this variable empty, new Terway vSwitches are created based on the value of the terway_vswitch_cidrs variable. 
variable "terway_vswitch_ids" {
  description = "List of existing pod vswitch ids for terway."
  type        = list(string)
  default     = []
}

# This variable specifies the CIDR blocks in which Terway vSwitches are created when terway_vswitch_ids is not specified. 
variable "terway_vswitch_cidrs" {
  description = "List of cidr blocks used to create several new vswitches when 'terway_vswitch_ids' is not specified."
  type        = list(string)
  default     = ["172.16.208.0/20", "172.16.224.0/20", "172.16.240.0/20"]
}

# Specify the ECS instance types of worker nodes. 
variable "worker_instance_types" {
  description = "The ecs instance types used to launch worker nodes."
  default     = ["ecs.g6.2xlarge", "ecs.g6.xlarge"]
}

# Specify a password for the worker node.
variable "password" {
  description = "The password of ECS instance."
  default     = "Test123456"
}

# Specify the add-ons that you want to install in the ACK managed cluster. The add-ons include Terway (network add-on), csi-plugin (volume add-on), csi-provisioner (volume add-on), logtail-ds (logging add-on), Nginx Ingress controller, ack-arms-prometheus (monitoring add-on), and ack-node-problem-detector (node diagnostics add-on). 
variable "cluster_addons" {
  type = list(object({
    name   = string
    config = string
  }))

  default = [
    {
      "name"   = "terway-eniip",
      "config" = "",
    },
    {
      "name"   = "logtail-ds",
      "config" = "{\"IngressDashboardEnabled\":\"true\"}",
    },
    {
      "name"   = "nginx-ingress-controller",
      "config" = "{\"IngressSlbNetworkType\":\"internet\"}",
    },
    {
      "name"   = "arms-prometheus",
      "config" = "",
    },
    {
      "name"   = "ack-node-problem-detector",
      "config" = "{\"sls_project_name\":\"\"}",
    },
    {
      "name"   = "csi-plugin",
      "config" = "",
    },
    {
      "name"   = "csi-provisioner",
      "config" = "",
    }
  ]
}

# Specify the prefix of the name of the ACK managed cluster. 
variable "k8s_name_prefix" {
  description = "The name prefix used to create managed kubernetes cluster."
  default     = "tf-ack-shenzhen"
}

# The default resource names. 
locals {
  k8s_name_terway         = substr(join("-", [var.k8s_name_prefix, "terway"]), 0, 63)
  k8s_name_flannel        = substr(join("-", [var.k8s_name_prefix, "flannel"]), 0, 63)
  k8s_name_ask            = substr(join("-", [var.k8s_name_prefix, "ask"]), 0, 63)
  new_vpc_name            = "tf-vpc-172-16"
  new_vsw_name_azD        = "tf-vswitch-azD-172-16-0"
  new_vsw_name_azE        = "tf-vswitch-azE-172-16-2"
  new_vsw_name_azF        = "tf-vswitch-azF-172-16-4"
  nodepool_name           = "default-nodepool"
  managed_nodepool_name   = "managed-node-pool"
  autoscale_nodepool_name = "autoscale-node-pool"
  log_project_name        = "log-for-${local.k8s_name_terway}"
}

# The ECS instance specifications of worker nodes. Terraform searches for ECS instance types that fulfill the CPU and memory requests. 
data "alicloud_instance_types" "default" {
  cpu_core_count       = 8
  memory_size          = 32
  availability_zone    = var.availability_zone[0]
  kubernetes_node_role = "Worker"
}

# The VPC. 
resource "alicloud_vpc" "default" {
  vpc_name   = local.new_vpc_name
  cidr_block = "172.16.0.0/12"
}

# The node vSwitches. 
resource "alicloud_vswitch" "vswitches" {
  count      = length(var.node_vswitch_ids) > 0 ?  0 : length(var.node_vswitch_cidrs)
  vpc_id     = alicloud_vpc.default.id
  cidr_block = element(var.node_vswitch_cidrs, count.index)
  zone_id    = element(var.availability_zone, count.index)
}

# The pod vSwitches. 
resource "alicloud_vswitch" "terway_vswitches" {
  count      = length(var.terway_vswitch_ids) > 0 ?  0 : length(var.terway_vswitch_cidrs)
  vpc_id     = alicloud_vpc.default.id
  cidr_block = element(var.terway_vswitch_cidrs, count.index)
  zone_id    = element(var.availability_zone, count.index)
}

# The ACK managed cluster. 
resource "alicloud_cs_managed_kubernetes" "default" {
  name                         = local.k8s_name_terway # The ACK cluster name. 
  cluster_spec                 = var.cluster_spec      # Create an ACK Pro cluster. 
  version                      = var.ack_version
  worker_vswitch_ids           = split(",", join(",", alicloud_vswitch.vswitches.*.id))        # The vSwitch to which the node pool belongs. Specify one or more vSwitch IDs. The vSwitches must reside in the zone specified by availability_zone. 
  pod_vswitch_ids              = split(",", join(",", alicloud_vswitch.terway_vswitches.*.id)) # The vSwitch of the pod. 
  new_nat_gateway              = true                                                          # Specify whether to create a NAT gateway when the Kubernetes cluster is created. Default value: true. 
  service_cidr                 = "10.11.0.0/16"                                                # The Service CIDR block. It cannot overlap with the VPC CIDR block or with the CIDR blocks of other Kubernetes clusters in the VPC. You cannot change the Service CIDR block after the cluster is created. 
  slb_internet_enabled         = true                                                          # Specify whether to create an Internet-facing SLB instance for the API server of the cluster. Default value: false. 
  enable_rrsa                  = true
  control_plane_log_components = ["apiserver", "kcm", "scheduler", "ccm"] # The control plane logs. 

  dynamic "addons" { # Component management. 
    for_each = var.cluster_addons
    content {
      name   = lookup(addons.value, "name", var.cluster_addons)
      config = lookup(addons.value, "config", var.cluster_addons)
    }
  }
}

# The regular node pool. 
resource "alicloud_cs_kubernetes_node_pool" "default" {
  cluster_id            = alicloud_cs_managed_kubernetes.default.id              # The ID of the ACK cluster. 
  node_pool_name        = local.nodepool_name                                    # The node pool name. 
  vswitch_ids           = split(",", join(",", alicloud_vswitch.vswitches.*.id)) # The vSwitch to which the node pool belongs. Specify one or more vSwitch IDs. The vSwitches must reside in the zone specified by availability_zone. 
  instance_types        = var.worker_instance_types
  instance_charge_type  = "PostPaid"
  runtime_name          = "containerd"
  runtime_version       = "1.6.20"
  desired_size          = 2            # The expected number of nodes in the node pool. 
  password              = var.password # The password that is used to log on to the cluster by using SSH. 
  install_cloud_monitor = true         # Specify whether to install the CloudMonitor agent on the nodes in the cluster. 
  system_disk_category  = "cloud_efficiency"
  system_disk_size      = 100
  image_type            = "AliyunLinux"

  data_disks {              # The data disk configuration of the node. 
    category = "cloud_essd" # The disk type. 
    size     = 120          # The disk size. 
  }
}

# Create a managed node pool. 
resource "alicloud_cs_kubernetes_node_pool" "managed_node_pool" {
  cluster_id     = alicloud_cs_managed_kubernetes.default.id              # The ID of the ACK cluster. 
  node_pool_name = local.managed_nodepool_name                            # The node pool name. 
  vswitch_ids    = split(",", join(",", alicloud_vswitch.vswitches.*.id)) # The vSwitch to which the node pool belongs. Specify one or more vSwitch IDs. The vSwitches must reside in the zone specified by availability_zone. 
  desired_size   = 0                                                      # The expected number of nodes in the node pool. 

  management {
    auto_repair     = true
    auto_upgrade    = true
    max_unavailable = 1
  }

  instance_types        = var.worker_instance_types
  instance_charge_type  = "PostPaid"
  runtime_name          = "containerd"
  runtime_version       = "1.6.20"
  password              = var.password
  install_cloud_monitor = true
  system_disk_category  = "cloud_efficiency"
  system_disk_size      = 100
  image_type            = "AliyunLinux"

  data_disks {
    category = "cloud_essd"
    size     = 120
  }
}

# Create a node pool for which auto scaling is enabled. The node pool can be scaled out to a maximum of 10 nodes and must contain at least 1 node. 
resource "alicloud_cs_kubernetes_node_pool" "autoscale_node_pool" {
  cluster_id     = alicloud_cs_managed_kubernetes.default.id
  node_pool_name = local.autoscale_nodepool_name
  vswitch_ids    = split(",", join(",", alicloud_vswitch.vswitches.*.id))

  scaling_config {
    min_size = 1
    max_size = 10
  }

  instance_types        = var.worker_instance_types
  runtime_name          = "containerd"
  runtime_version       = "1.6.20"
  password              = var.password # The password that is used to log on to the cluster by using SSH. 
  install_cloud_monitor = true         # Specify whether to install the CloudMonitor agent on the nodes in the cluster. 
  system_disk_category  = "cloud_efficiency"
  system_disk_size      = 100
  image_type            = "AliyunLinux3"

  data_disks {              # The data disk configuration of the node. 
    category = "cloud_essd" # The disk type. 
    size     = 120          # The disk size. 
  }
}