
Container Service for Kubernetes:Use Terraform to create an ACK managed cluster

Last Updated:Sep 03, 2024

This topic describes how to use Terraform to create an ACK managed cluster.

Prerequisites

  • Terraform is installed.

    Note

    You must install Terraform 0.12.28 or later. You can run the terraform --version command to query the Terraform version.

  • Your account information is configured.

    Run the following commands to create environment variables to store identity information.

    The identity information of an Alibaba Cloud account is sensitive data because an Alibaba Cloud account has full permissions on the resources that belong to the account. We recommend that you create a Resource Access Management (RAM) user named Terraform. Then, create an AccessKey pair for the RAM user and grant permissions to the RAM user. For more information, see Create a RAM user and Grant permissions to a RAM user.

    Linux environment

    export ALICLOUD_ACCESS_KEY="************"   # Replace the value with the AccessKey ID of your Alibaba Cloud account. 
    export ALICLOUD_SECRET_KEY="************"   # Replace the value with the AccessKey secret of your Alibaba Cloud account. 
    export ALICLOUD_REGION="cn-beijing"         # Replace the value with the ID of the region in which your cluster resides.

    Windows environment

    rem Replace the value with the AccessKey ID of your Alibaba Cloud account.
    set ALICLOUD_ACCESS_KEY=************
    rem Replace the value with the AccessKey secret of your Alibaba Cloud account.
    set ALICLOUD_SECRET_KEY=************
    rem Replace the value with the ID of the region in which your cluster resides.
    set ALICLOUD_REGION=cn-beijing
  • Container Service for Kubernetes (ACK) is activated. If you want to use Terraform to activate ACK, refer to Activate ACK and assign default roles to ACK.

  • A working directory is created, and a configuration file named variable.tf is created in the directory.

    In Terraform, you can define input variables in the variable.tf file and reuse these variables in other configuration files. In subsequent steps, these variables are referenced in the main.tf file to configure Terraform resources.

    The resources in this example include a virtual private cloud (VPC), three node vSwitches in the VPC, and three pod vSwitches.

    When you create an ACK managed cluster, the following add-ons are installed in the cluster by default: Terway (network add-on), csi-plugin (volume add-on), csi-provisioner (volume add-on), logtail-ds (logging add-on), Nginx Ingress controller, ack-arms-prometheus (monitoring add-on), and ack-node-problem-detector (node diagnostics add-on).

    View the variable.tf file that is used in this example

    variable "availability_zone" {  # Specify the zones of vSwitches. 
      description = "The availability zones of vswitches."
      # Make sure that the same region is specified in the main.tf and variable.tf files. 
      default     = ["cn-shenzhen-c", "cn-shenzhen-e", "cn-shenzhen-f"]
    } 
    
    variable "node_vswitch_ids" { # Specify vSwitch IDs. 
      description = "List of existing node vswitch ids for terway."
      type        = list(string)
      default     = []
    }
    
    variable "node_vswitch_cidrs" { # This variable specifies the CIDR blocks in which vSwitches are created when node_vswitch_ids is not specified. 
      description = "List of cidr blocks used to create several new vswitches when 'node_vswitch_ids' is not specified."
      type        = list(string)
      default     = ["172.16.0.0/23", "172.16.2.0/23", "172.16.4.0/23"]
    }
    
    variable "terway_vswitch_ids" { # Specify the IDs of Terway vSwitches. If you leave this variable empty, Terway vSwitches are created in the CIDR blocks specified in terway_vswitch_cidrs by default. 
      description = "List of existing pod vswitch ids for terway."
      type        = list(string)
      default     = []
    }
    
    variable "terway_vswitch_cidrs" {  # This variable specifies the CIDR blocks in which Terway vSwitches are created when terway_vswitch_ids is not specified. 
      description = "List of cidr blocks used to create several new vswitches when 'terway_vswitch_ids' is not specified."
      type        = list(string)
      default     = ["172.16.208.0/20", "172.16.224.0/20", "172.16.240.0/20"]
    }
    
    # Node Pool worker_instance_types
    variable "worker_instance_types" { # Specify the instance types of worker nodes. 
      description = "The ecs instance types used to launch worker nodes."
      default     = ["ecs.g6.2xlarge", "ecs.g6.xlarge"]
    }
    
    # Password for Worker nodes
    variable "password" {
      description = "The password of ECS instance."
      default     = "Test123456"
    }
    
    # Cluster Addons
    variable "cluster_addons" {    # Specify the add-ons to be installed in the ACK managed cluster. You need to specify the name and configuration of each add-on that you want to install. 
      type = list(object({
        name   = string
        config = string
      }))
    
      default = [
        {
          "name"   = "terway-eniip",
          "config" = "",
        },
        {
          "name"   = "logtail-ds",
          "config" = "{\"IngressDashboardEnabled\":\"true\"}",
        },
        {
          "name"   = "nginx-ingress-controller",
          "config" = "{\"IngressSlbNetworkType\":\"internet\"}",
        },
        {
          "name"   = "arms-prometheus",
          "config" = "",
        },
        {
          "name"   = "ack-node-problem-detector",
          "config" = "{\"sls_project_name\":\"\"}",
        },
        {
          "name"   = "csi-plugin",
          "config" = "",
        },
        {
          "name"   = "csi-provisioner",
          "config" = "",
        }
      ]
    }
    Note

    Make sure that the same region is specified in the main.tf and variable.tf files.
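    The variable-to-resource wiring described above can be sketched as follows. This is an illustration only, not part of the example files: the resource is hypothetical, but the variable names come from the variable.tf listing above, and each one is referenced in main.tf through the var. prefix.

    ```hcl
    # Sketch: how a resource in main.tf consumes variables declared in variable.tf.
    # The variable names below match the variable.tf listing in this topic.
    resource "alicloud_vswitch" "example" {
      vpc_id     = alicloud_vpc.default.id     # attribute of another resource
      cidr_block = var.node_vswitch_cidrs[0]   # first CIDR block from variable.tf
      zone_id    = var.availability_zone[0]    # first zone from variable.tf
    }
    ```

    A variable's default value can also be overridden at run time, for example with terraform apply -var 'password=YourPassword'.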

Use Terraform to create an ACK managed cluster (Terway)

  1. In the working directory that you created in Prerequisites, create a configuration file named main.tf.

    Note

    The main.tf file is used to create and configure ACK cluster resources on Alibaba Cloud. During the resource creation process, the main.tf file references the variables defined in the variable.tf file.

    The main.tf file is used to configure the following settings for Terraform:

    • Create a VPC, three node vSwitches in the VPC, and three pod vSwitches.

    • Create an ACK managed cluster.

    • Create a regular node pool that consists of two nodes.

    • Create a managed node pool.

    • Create a node pool for which auto scaling is enabled.

    View the content of the main.tf file

    provider "alicloud" {
      region = "cn-shenzhen"
      # Make sure that the same region is specified in the main.tf and variable.tf files.
    }
    
    variable "k8s_name_prefix" {  # Specify the prefix of the name of the ACK managed cluster. 
      description = "The name prefix used to create managed kubernetes cluster."
      default     = "tf-ack-shenzhen"
    }
    
    resource "random_uuid" "this" {} 
    # The default resource names. 
    locals {
      k8s_name_terway         = substr(join("-", [var.k8s_name_prefix, "terway"]), 0, 63)
      k8s_name_flannel        = substr(join("-", [var.k8s_name_prefix, "flannel"]), 0, 63)
      k8s_name_ask            = substr(join("-", [var.k8s_name_prefix, "ask"]), 0, 63)
      new_vpc_name            = "tf-vpc-172-16"
      new_vsw_name_azD        = "tf-vswitch-azD-172-16-0"
      new_vsw_name_azE        = "tf-vswitch-azE-172-16-2"
      new_vsw_name_azF        = "tf-vswitch-azF-172-16-4"
      nodepool_name           = "default-nodepool"
      managed_nodepool_name   = "managed-node-pool"
      autoscale_nodepool_name = "autoscale-node-pool"
      log_project_name        = "log-for-${local.k8s_name_terway}"
    }
    # The ECS instance specifications of worker nodes. Terraform searches for ECS instance types that fulfill the CPU and memory requests. 
    data "alicloud_instance_types" "default" {
      cpu_core_count       = 8
      memory_size          = 32
      availability_zone    = var.availability_zone[0]
      kubernetes_node_role = "Worker"
    }
    # The zones in which ECS instances meet the specification requirements. 
    data "alicloud_zones" "default" {
      available_instance_type = data.alicloud_instance_types.default.instance_types[0].id
    }
    # The VPC. 
    resource "alicloud_vpc" "default" { 
      vpc_name   = local.new_vpc_name
      cidr_block = "172.16.0.0/12"
    }
    # The node vSwitches. 
    resource "alicloud_vswitch" "vswitches" { 
      count      = length(var.node_vswitch_ids) > 0 ? 0 : length(var.node_vswitch_cidrs)
      vpc_id     = alicloud_vpc.default.id
      cidr_block = element(var.node_vswitch_cidrs, count.index)
      zone_id    = element(var.availability_zone, count.index)
    }
    # The pod vSwitches. 
    resource "alicloud_vswitch" "terway_vswitches" {
      count      = length(var.terway_vswitch_ids) > 0 ? 0 : length(var.terway_vswitch_cidrs)
      vpc_id     = alicloud_vpc.default.id
      cidr_block = element(var.terway_vswitch_cidrs, count.index)
      zone_id    = element(var.availability_zone, count.index)
    }
    # The ACK managed cluster. 
    resource "alicloud_cs_managed_kubernetes" "default" {
      name               = local.k8s_name_terway     # The name of the cluster. 
      cluster_spec       = "ack.pro.small"           # Create an ACK Pro cluster. 
      version            = "1.28.9-aliyun.1"
      worker_vswitch_ids = split(",", join(",", alicloud_vswitch.vswitches.*.id)) # The vSwitches of the node pool. Specify one or more vSwitch IDs. The vSwitches must reside in the zone specified by availability_zone. 
      pod_vswitch_ids    = split(",", join(",", alicloud_vswitch.terway_vswitches.*.id)) # The pod vSwitches. 
      new_nat_gateway    = true               # Specify whether to create a NAT gateway when the system creates the Kubernetes cluster. Default value: true. 
      service_cidr       = "10.11.0.0/16"     # The Service CIDR block. It cannot overlap with the VPC CIDR block or with the CIDR blocks of other Kubernetes clusters in the VPC. You cannot change the Service CIDR block after the cluster is created.
      slb_internet_enabled = true             # Specify whether to create an Internet-facing SLB instance for the API server of the cluster. Default value: false. 
      enable_rrsa        = true
      control_plane_log_components = ["apiserver", "kcm", "scheduler", "ccm"]  # The control plane logs.
    
      dynamic "addons" {    # The add-ons. 
        for_each = var.cluster_addons
        content {
          name   = lookup(addons.value, "name", "")
          config = lookup(addons.value, "config", "")
        }
      }
    }
    
    resource "alicloud_cs_kubernetes_node_pool" "default" {      # The regular node pool. 
      cluster_id = alicloud_cs_managed_kubernetes.default.id     # The name of the cluster. 
      node_pool_name = local.nodepool_name                       # The name of the node pool. 
      vswitch_ids = split(",", join(",", alicloud_vswitch.vswitches.*.id))  # The vSwitches of the node pool. Specify one or more vSwitch IDs. The vSwitches must reside in the zone specified by availability_zone.
      instance_types       = var.worker_instance_types
      instance_charge_type = "PostPaid"
      runtime_name    = "containerd"
      runtime_version = "1.6.20"
      desired_size = 2                       # The expected number of nodes in the node pool. 
      password = var.password                # The password that is used to log on to the cluster by using SSH. 
      install_cloud_monitor = true           # Specify whether to install the CloudMonitor agent on the nodes in the cluster. 
      system_disk_category = "cloud_efficiency"
      system_disk_size     = 100
      image_type = "AliyunLinux"
    
      data_disks {     # The configuration of the data disks of nodes. 
        category = "cloud_essd"   # The disk type. 
        size = 120      # The disk size. 
      }
    } 
    
    resource "alicloud_cs_kubernetes_node_pool" "managed_node_pool" {      # The managed node pool.
      cluster_id = alicloud_cs_managed_kubernetes.default.id               # The name of the cluster. 
      node_pool_name = local.managed_nodepool_name                         # The name of the node pool.
      vswitch_ids = split(",", join(",", alicloud_vswitch.vswitches.*.id)) # The vSwitches of the node pool. Specify one or more vSwitch IDs. The vSwitches must reside in the zone specified by availability_zone.
      desired_size = 0         # The expected number of nodes in the node pool.
    
      management {
        auto_repair     = true
        auto_upgrade    = true
        max_unavailable = 1
      }
    
      instance_types       = var.worker_instance_types
      instance_charge_type = "PostPaid"
      runtime_name    = "containerd"
      runtime_version = "1.6.20"
      password = var.password
      install_cloud_monitor = true
      system_disk_category = "cloud_efficiency"
      system_disk_size     = 100
      image_type = "AliyunLinux"
    
      data_disks {
        category = "cloud_essd"
        size = 120
      }
    }
    
    resource "alicloud_cs_kubernetes_node_pool" "autoscale_node_pool" {
      cluster_id = alicloud_cs_managed_kubernetes.default.id
      node_pool_name = local.autoscale_nodepool_name
      vswitch_ids = split(",", join(",", alicloud_vswitch.vswitches.*.id))
    
      scaling_config {
        min_size = 1
        max_size = 10
      }
    
      instance_types = var.worker_instance_types
      runtime_name    = "containerd"
      runtime_version = "1.6.20"
      password = var.password                # The password that is used to log on to the cluster by using SSH. 
      install_cloud_monitor = true           # Specify whether to install the CloudMonitor agent on the nodes in the cluster.
      system_disk_category = "cloud_efficiency"
      system_disk_size     = 100
      image_type = "AliyunLinux3"
    
      data_disks {                           # The configuration of the data disks of nodes.
        category = "cloud_essd"              # The disk type. 
        size = 120                           # The disk size. 
      }
    }
    

    For more information about the parameters for creating an ACK managed cluster, see alicloud_cs_managed_kubernetes.
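    If you want terraform apply to print cluster identifiers when it completes, you can optionally add output blocks to main.tf. This is a sketch that is not part of the original example; id and name are attributes of the alicloud_cs_managed_kubernetes resource shown above.

    ```hcl
    # Optional: print the cluster ID and name after terraform apply completes.
    output "cluster_id" {
      value = alicloud_cs_managed_kubernetes.default.id
    }

    output "cluster_name" {
      value = alicloud_cs_managed_kubernetes.default.name
    }
    ```

    You can query these values again later by running terraform output.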

  2. Run the following command to initialize the environment for Terraform:

    terraform init

    If the following information is returned, Terraform is initialized:

    Initializing the backend...
    
    Initializing provider plugins...
    - Checking for available provider plugins...
    - Downloading plugin for provider "alicloud" (hashicorp/alicloud) 1.90.1...
    ...
    
    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
    
    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.
  3. Run the following command to create an execution plan:

    terraform plan

    If the following information is returned, the execution plan is created:

    Refreshing Terraform state in-memory prior to plan...
    The refreshed state will be used to calculate this plan, but will not be
    persisted to local or remote state storage.
    ...
    Plan: 12 to add, 0 to change, 0 to destroy.
    ...
  4. Run the following command to create the cluster:

    terraform apply

    When the following information is returned, enter yes and press Enter. The cluster is then created.

    ...
    Do you want to perform these actions?
      Terraform will perform the actions described above.
      Only 'yes' will be accepted to approve.
    
      Enter a value: yes
    ...
    alicloud_cs_managed_kubernetes.default: Creation complete after 8m26s [id=************]
    
    Apply complete! Resources: 12 added, 0 changed, 0 destroyed.