How you add nodes to a hybrid cluster depends on how the self-managed Kubernetes cluster was created, for example, with kubeadm, Kubernetes binaries, or Rancher. This topic describes how to create a script that adds nodes to a hybrid cluster.
Prerequisites
The operations before Step 5 in the Build a hybrid cloud cluster and add ECS instances to the cluster topic are completed.
Step 1: Create a script to add cluster nodes
You can use one of the following methods to create a script to add cluster nodes.
Method 1: Use Kubernetes binaries
The following kubelet configuration is provided as an example:
cat >/usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/data0/kubernetes/bin/kubelet \\
--node-ip=${ALIBABA_CLOUD_NODE_NAME} \\
--hostname-override=${ALIBABA_CLOUD_NODE_NAME} \\
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \\
--config=/var/lib/kubelet/config.yaml \\
--kubeconfig=/etc/kubernetes/kubelet.conf \\
--cert-dir=/etc/kubernetes/pki/ \\
--cni-bin-dir=/opt/cni/bin \\
--cni-cache-dir=/opt/cni/cache \\
--cni-conf-dir=/etc/cni/net.d \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes/logs \\
--log-file=/var/log/kubernetes/logs/kubelet.log \\
--node-labels=${ALIBABA_CLOUD_LABELS} \\
--root-dir=/var/lib/kubelet \\
--provider-id=${ALIBABA_CLOUD_PROVIDER_ID} \\
--register-with-taints=${ALIBABA_CLOUD_TAINTS} \\
--v=4
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
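After the unit file is written, you would typically reload systemd and start the kubelet. The following is a minimal sketch, assuming the kubelet binary, kubeconfig files, and environment variables referenced above are already in place on the node:
# Reload unit definitions so systemd picks up the new kubelet.service, then start the kubelet.
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet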
When you write the script, you must use the system environment variables that are provided by the external cluster registered in the Container Service for Kubernetes (ACK) console. The following table lists the required system environment variables.
System environment variable | Description | Example
ALIBABA_CLOUD_PROVIDER_ID | You must set this variable in the script. Otherwise, errors may occur during cluster management. |
ALIBABA_CLOUD_NODE_NAME | You must set this variable in the script. Otherwise, nodes in the node pool may have abnormal states. |
ALIBABA_CLOUD_LABELS | You must set this variable in the script. Otherwise, errors may occur during node pool management and workload scheduling between cloud and on-premises nodes. | The workload=cpu label is a custom label defined in the node pool configuration. Other labels are system labels.
ALIBABA_CLOUD_TAINTS | You must set this variable in the script. Otherwise, the taints that are added to the node pool do not take effect. |
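Before the kubelet is configured, you can optionally verify that the ACK-provided variables are present. The following check is an illustrative sketch and not part of any official script; note that ALIBABA_CLOUD_TAINTS can legitimately be empty if the node pool defines no taints:
# Illustrative pre-flight check: fail early if a required variable is unset or empty.
for var in ALIBABA_CLOUD_PROVIDER_ID ALIBABA_CLOUD_NODE_NAME ALIBABA_CLOUD_LABELS; do
  if [[ -z "${!var}" ]]; then
    echo "required environment variable ${var} is not set" >&2
    exit 1
  fi
done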
Method 2: Use kubeadm
The following node initialization script is used to add nodes when you use kubeadm to initialize self-managed Kubernetes clusters.
#!/bin/bash
# Uninstall the earlier version of Docker.
yum remove -y docker \
docker-client \
docker-client-latest \
docker-ce-cli \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
# Specify a Yellowdog Updater, Modified (YUM) repository.
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install Docker.
yum install -y docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io-1.4.3 conntrack
# Enable and restart Docker.
systemctl enable docker
systemctl restart docker
# Disable swap.
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
# Modify the /etc/sysctl.conf file.
# If the /etc/sysctl.conf file exists, modify the file based on the following content:
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf
# If the /etc/sysctl.conf file does not exist, create a file named /etc/sysctl.conf and copy the following content to the file:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
# Run the following command to apply the modifications.
sysctl -p
# Specify the source YUM repository for Kubernetes.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Uninstall the earlier versions of kubelet, kubeadm, and kubectl.
yum remove -y kubelet kubeadm kubectl
# Install kubelet, kubeadm, and kubectl.
yum install -y kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4
# Reload systemd and start kubelet.
systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet
kubeadm join --token 2q3s0u.w3d10wtsndqj**** 172.16.0.153:XXXX --discovery-token-unsafe-skip-ca-verification
You must specify the following environment variables in the node initialization script: ALIBABA_CLOUD_PROVIDER_ID, ALIBABA_CLOUD_LABELS, ALIBABA_CLOUD_NODE_NAME, and ALIBABA_CLOUD_TAINTS. These variables carry node information that is delivered by the registered ACK cluster when a node is added. The following code provides an example:
#!/bin/bash
# Uninstall the earlier version of Docker.
yum remove -y docker \
docker-client \
docker-client-latest \
docker-ce-cli \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
# Specify a YUM repository.
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install Docker.
yum install -y docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io-1.4.3 conntrack
# Enable and restart Docker.
systemctl enable docker
systemctl restart docker
# Disable swap.
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
# Modify the /etc/sysctl.conf file.
# If the /etc/sysctl.conf file exists, modify the file based on the following content:
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf
# If the /etc/sysctl.conf file does not exist, create a file named /etc/sysctl.conf and copy the following content to the file:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
# Run the following command to apply the modifications.
sysctl -p
# Specify the source YUM repository for Kubernetes.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Uninstall the earlier versions of kubelet, kubeadm, and kubectl.
yum remove -y kubelet kubeadm kubectl
# Install kubelet, kubeadm, and kubectl.
yum install -y kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4
# Configure the node labels, taints, node name, and node provider ID.
KUBEADM_CONFIG_FILE="/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf"
if [[ $ALIBABA_CLOUD_LABELS != "" ]];then
  option="--node-labels"
  if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_LABELS},@g" $KUBEADM_CONFIG_FILE
  elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS} @g" $KUBEADM_CONFIG_FILE
  else
    sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS}\"" $KUBEADM_CONFIG_FILE
  fi
fi
if [[ $ALIBABA_CLOUD_TAINTS != "" ]];then
  option="--register-with-taints"
  if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_TAINTS},@g" $KUBEADM_CONFIG_FILE
  elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_TAINTS} @g" $KUBEADM_CONFIG_FILE
  else
    sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_TAINTS}\"" $KUBEADM_CONFIG_FILE
  fi
fi
if [[ $ALIBABA_CLOUD_NODE_NAME != "" ]];then
  option="--hostname-override"
  if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_NODE_NAME},@g" $KUBEADM_CONFIG_FILE
  elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_NODE_NAME} @g" $KUBEADM_CONFIG_FILE
  else
    sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_NODE_NAME}\"" $KUBEADM_CONFIG_FILE
  fi
fi
if [[ $ALIBABA_CLOUD_PROVIDER_ID != "" ]];then
  option="--provider-id"
  if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_PROVIDER_ID},@g" $KUBEADM_CONFIG_FILE
  elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_PROVIDER_ID} @g" $KUBEADM_CONFIG_FILE
  else
    sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_PROVIDER_ID}\"" $KUBEADM_CONFIG_FILE
  fi
fi
# Reload systemd and start kubelet.
systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet
kubeadm join --node-name $ALIBABA_CLOUD_NODE_NAME --token 2q3s0u.w3d10wtsndqj**** 172.16.0.153:XXXX --discovery-token-unsafe-skip-ca-verification
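To test the script manually on a node, you can export the environment variables before running it. The values below are placeholders for illustration only; when the script runs through the registered cluster, ACK injects the real values:
# Placeholder values for a manual test run; ACK provides the real values when a node is added.
export ALIBABA_CLOUD_PROVIDER_ID="<provider-id>"
export ALIBABA_CLOUD_NODE_NAME="<node-name>"
export ALIBABA_CLOUD_LABELS="<key1=value1,key2=value2>"
export ALIBABA_CLOUD_TAINTS="<key=value:NoSchedule>"
bash attachnode.sh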
Step 2: Save the script
Save the script to an HTTP file server, such as an Object Storage Service (OSS) bucket. The https://kubelet-****.oss-ap-southeast-3-internal.aliyuncs.com/attachnode.sh address is used as an example.
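If you use the ossutil command-line tool, you can upload the script to the bucket as shown in the following sketch (the bucket name is a placeholder based on the example address above):
# Upload the node initialization script to the OSS bucket (bucket name is a placeholder).
ossutil cp attachnode.sh oss://kubelet-****/attachnode.sh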
Step 3: Use the script
Register the self-managed Kubernetes cluster in the ACK console. For more information, see Create a registered cluster in the ACK console.
The cluster registration proxy automatically creates a ConfigMap named ack-agent-config in the kube-system namespace of the external cluster. The following code block shows the initial configuration of the ack-agent-config ConfigMap:
apiVersion: v1
data:
  addNodeScriptPath: ""
kind: ConfigMap
metadata:
  name: ack-agent-config
  namespace: kube-system
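You can view the current content of the ConfigMap with kubectl, for example:
# Print the ack-agent-config ConfigMap in the kube-system namespace.
kubectl -n kube-system get configmap ack-agent-config -o yaml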
Add the https://kubelet-****.oss-ap-southeast-3-internal.aliyuncs.com/attachnode.sh address of the script to the addNodeScriptPath field and save the modification. The following example shows the modified ConfigMap:
apiVersion: v1
data:
  addNodeScriptPath: https://kubelet-****.oss-ap-southeast-3-internal.aliyuncs.com/attachnode.sh
kind: ConfigMap
metadata:
  name: ack-agent-config
  namespace: kube-system
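Instead of editing the ConfigMap manually, you can apply the same change with kubectl. The following one-liner is a sketch that uses the example script address from Step 2:
# Set addNodeScriptPath in kube-system/ack-agent-config to the script address.
kubectl -n kube-system patch configmap ack-agent-config --type merge \
  -p '{"data":{"addNodeScriptPath":"https://kubelet-****.oss-ap-southeast-3-internal.aliyuncs.com/attachnode.sh"}}'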