Elastic Container Instance:Deploy a VNode in a self-managed Kubernetes cluster to connect to Elastic Container Instance

Last Updated: Aug 08, 2024

To connect a self-managed Kubernetes cluster hosted on an Elastic Compute Service (ECS) instance to Elastic Container Instance, you must deploy a VNode in the cluster. This topic describes how to deploy a VNode in a self-managed Kubernetes cluster in the same virtual private cloud (VPC).

Background information

The integration of Elastic Container Instance with Kubernetes provides a hierarchical solution for Kubernetes resource management. Elastic Container Instance schedules and manages pods in the underlying architecture, and Kubernetes manages workloads at the platform layer. If you have created a self-managed Kubernetes cluster on an ECS instance, you can deploy a VNode in the cluster to use Elastic Container Instance. For more information, see Connect a self-managed Kubernetes cluster to Elastic Container Instance.

Prerequisites

  • A self-managed Kubernetes cluster of version 1.13 to 1.30 is created on an ECS instance by using kubeadm.

  • The Flannel, Calico, or Cilium network plug-in is deployed in the cluster.

Install VNodectl

Elastic Container Instance provides the VNodectl CLI to deploy and manage VNodes. We recommend that you install VNodectl on the master node of the Kubernetes cluster.

  1. Connect to your Kubernetes cluster.

  2. Download the installation package of VNodectl.

    wget https://eci-docs.oss-cn-beijing.aliyuncs.com/vnode/vnodectl_0.0.5-beta_linux_amd64.tar.gz -O vnodectl.tar.gz 
  3. Extract the downloaded VNodectl package.

    tar xvf vnodectl.tar.gz 
  4. Copy the extracted binary to a directory in your PATH and rename it vnode.

    cp vnodectl /usr/local/bin/vnode
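
To confirm that the CLI is ready to use, check that the vnode binary is executable and resolvable on your PATH. This is a quick sanity check; the file mode after extraction can vary.

    chmod +x /usr/local/bin/vnode    # ensure the executable bit is set
    which vnode                      # should print /usr/local/bin/vnode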

Configure the ~/.vnode/config file

  1. Open the ~/.vnode/config file for editing.

    vim ~/.vnode/config

    Modify the content of the ~/.vnode/config file based on your business requirements. Example:

    Important
    • The kubeconfig file must have cluster-admin permissions. If you want to narrow the permissions that the kubeconfig file grants, see Configure the cluster.

    • Make sure that the API server address specified in the kubeconfig file is accessible to the VNode.

    kind: vnode
    contexts:
        - name: default                                          # The name of the context.
          region-id: cn-hangzhou                                 # The region ID.
          access-key-id: LTAI5tJbBkHcHBUmuP7C****                # The AccessKey ID.
          access-key-secret: 5PlpKJT6sgLcD4f9y5pACNDbEg****      # The AccessKey secret. 
          vswitch-id: vsw-7xv2yk45qp5etidgf****                  # The ID of the vSwitch that is connected to the VNode.
          security-group-id: sg-7xv5tcch4kjdr65t****             # The ID of the security group to which the VNode belongs.
          kubeconfig: /path/to/kubeconfig                        # The kubeconfig file of the cluster.
    current-context: default
  2. Set the current context to load the corresponding configurations.

    vnode config set-context <context-name>
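
    For example, using the context name from the sample configuration above:

    vnode config set-context default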

Create a VNode

  1. Create a VNode.

    vnode create

    The following is a sample output. The value of the VirtualNodeId parameter is the ID of the generated VNode.

    {"RequestId":"AB772F9D-2FEF-5BFD-AAFB-DA3444851F29","VirtualNodeId":"vnd-7xvetkyase7gb62u****"}
  2. View information about nodes.

    kubectl get node

    The following sample output indicates that the VNode is deployed in the cluster.

    NAME                                    STATUS     ROLES                  AGE    VERSION
    cn-hangzhou.vnd-7xvetkyase7gb62u****    Ready      agent                  174m   v1.20.6
    vnode-test001                           Ready      control-plane,master   23h    v1.20.6
    vnode-test002                           Ready      <none>                 22h    v1.20.6
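
    The VNode appears as a regular node but carries labels (and typically taints) that distinguish it from real nodes. To inspect them before you configure scheduling in the following sections, describe the node. The exact labels and taints may vary by version.

    kubectl describe node cn-hangzhou.vnd-7xvetkyase7gb62u****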

Prevent DaemonSets from being scheduled to the VNode

DaemonSets cannot run on VNodes because VNodes are not real nodes. After you create a VNode, modify the kube-proxy DaemonSet in the kube-system namespace and configure nodeAffinity to prevent its pods from being scheduled to the VNode.

  1. Modify the configurations of the DaemonSet.

    kubectl -n kube-system edit ds kube-proxy
  2. Configure nodeAffinity.

    Add the following YAML content to spec > template > spec:

    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: type
              operator: NotIn
              values:
              - virtual-kubelet
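
    The rule above excludes nodes that carry the label type: virtual-kubelet, which the VNode carries. To confirm the label values of your nodes and verify that kube-proxy no longer runs on the VNode, you can run the following commands. Other DaemonSets that tolerate all taints, such as those of some network plug-ins, may need the same patch.

    kubectl get nodes -L type                                    # shows the type label of each node
    kubectl -n kube-system get pods -o wide | grep kube-proxy    # no pod should remain on the VNode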

Schedule pods to the VNode

After you create a VNode, you can use one of the following methods to schedule pods to the VNode. The pods then run as elastic container instances on the VNode.

  • Manual scheduling

    You can configure the nodeSelector and tolerations parameters or specify the nodeName parameter to schedule pods to the VNode. For more information, see Schedule pods to a VNode. A minimal manifest is sketched after this list.

  • Automatic scheduling

    After you deploy the eci-profile component, you can specify the Selector parameter. This way, the system automatically schedules pods that meet the conditions specified by Selector to the VNode. For more information, see Use eci-profile to schedule pods to a VNode.
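
The following is a minimal sketch of the manual scheduling approach. The nodeSelector label matches the nodeAffinity rule configured earlier; the toleration key is an assumption, so check the actual taints on your VNode by using kubectl describe node before you rely on it.

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-eci
    spec:
      nodeSelector:
        type: virtual-kubelet              # the label used by the nodeAffinity rule above
      tolerations:
      - key: virtual-kubelet.io/provider   # assumed taint key; verify on your VNode
        operator: Exists
        effect: NoSchedule
      containers:
      - name: nginx
        image: nginx:stable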

Make the pods in the overlay network of the self-managed Kubernetes cluster accessible to Elastic Container Instance-based pods

Pods that are scheduled to the VNode use an elastic network interface (ENI) attached to the specified vSwitch in the VPC. By default, the pods are assigned internal IP addresses from that vSwitch.

By default, Elastic Container Instance-based pods cannot access the pods in the overlay network (Flannel, Calico, or Cilium) of the self-managed Kubernetes cluster, although the pods in the overlay network can access the Elastic Container Instance-based pods. If the Elastic Container Instance-based pods need to access the pods in the overlay network, add a route entry to the route table of the VPC to which the Elastic Container Instance-based pods belong. The route entry directs packets that are destined for the overlay network to the corresponding ECS node in the self-managed cluster.

Sample configurations:

  • Sample scenario

    For example, a cluster contains two pods. One pod (test1) runs on the VNode. The other pod (test2) runs on the ECS node. By default, test2 can access test1, but test1 cannot access test2.

    NAME      READY     RESTARTS    AGE    IP                NODE                                   NOMINATED NODE   READINESS GATES
    test1     1/1       0           58s    192.168.0.245     cn-hangzhou.vnd-7xvetkyase7gb62u****   <none>           <none>
    test2     1/1       0           35s    10.88.1.4         vnode-test002                          <none>           <none>
  • Procedure

    1. Log on to the VPC console.

    2. In the left-side navigation pane, click Route Tables.

    3. Switch to the region in which the elastic container instance resides, find the route table of the VPC to which the Elastic Container Instance-based pods belong, and click the ID of the route table.

    4. On the Route Entry List tab, click the Custom Route tab.

    5. Click Add Route Entry.

    6. In the dialog box that appears, configure the route entry and click OK.

      In this example, the following configurations are used:

      • Destination CIDR Block: Enter the pod CIDR block of the overlay network segment that is assigned to the ECS node. Example: 10.88.1.0/24.

      • Next Hop Type: Select ECS Instance from the drop-down list.

      • ECS Instance: Select the ECS node that hosts the destination pods (vnode-test002 in this example).

  • Verify the result

    Run the kubectl exec command to enter a container of test1 and ping the IP address of test2. If the ping succeeds, test1 can access test2.
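
    For example, using the sample pods above and assuming that the test1 container image includes the ping utility:

    kubectl exec -it test1 -- ping -c 3 10.88.1.4    # the overlay IP address of test2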
