The container network plug-ins used in a hybrid cluster consist of two parts: the network plug-ins that run in the data center and the network plug-ins that run on cloud compute nodes. This topic describes how to deploy and configure Terway in a hybrid cluster.
Prerequisites
Terway parameters are configured when you create a registered cluster in Scenario 2 (the data center uses a BGP network for container networking) and Scenario 3 (the data center uses the host network for container networking). In these scenarios, make sure that:
IPVLAN is enabled or disabled based on your business requirements.
Pod vSwitches are specified.
The Service CIDR block is specified.
For more information, see Create a registered cluster.
Scenario 1: The data center uses an overlay network for container networking
In this scenario, the data center uses an overlay network for container networking. Cloud compute nodes can also use this network mode. You only need to make sure that the cloud compute nodes can pull the container image used by the DaemonSet of the container network plug-in.
The following overlay network modes are commonly used:
Flannel VXLAN
Calico IPIP
Cilium VXLAN
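For example (a hedged sketch; the DaemonSet names and namespaces depend on which plug-in you run), you can list the images that the network plug-in DaemonSets use and then confirm that each cloud compute node can pull them:

```shell
# List the images used by DaemonSets in kube-system, where network
# plug-ins such as Flannel, Calico, or Cilium typically run.
# (DaemonSet names and namespaces vary by installation.)
kubectl -n kube-system get ds \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.template.spec.containers[*].image}{"\n"}{end}'

# On a cloud compute node, verify that an image can be pulled, for example:
# crictl pull <image-from-the-output-above>
```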
Scenario 2: The data center uses a BGP network for container networking
In this scenario, the data center uses a Border Gateway Protocol (BGP) network for container networking. You must use the Terway network plug-in on cloud compute nodes. For more information about how to connect on-premises networks and the cloud, see Configure and manage BGP.
In this scenario, make sure that the following conditions are met:
The DaemonSet of the on-premises container network plug-in, such as the BGP route reflector in Calico, is not scheduled to cloud compute nodes.
The DaemonSet of the Terway network plug-in is not scheduled to on-premises compute nodes.
Each compute node that is added from a node pool in a registered cluster has the alibabacloud.com/external=true label. You can use this label to distinguish cloud compute nodes from on-premises compute nodes. For example, you can configure node affinity to ensure that the DaemonSet of the on-premises Calico network plug-in is not scheduled to nodes that have the alibabacloud.com/external=true label. You can use the same method to ensure that other on-premises workloads are not scheduled to cloud compute nodes. Run the following command to update the Calico network plug-in:
cat <<EOF > calico-ds.patch
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: alibabacloud.com/external
                operator: NotIn
                values:
                - "true"
EOF
kubectl -n kube-system patch ds calico-node -p "$(cat calico-ds.patch)"
By default, the DaemonSet of Terway is scheduled only to nodes that have the alibabacloud.com/external=true label.
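To verify which nodes fall on each side of this split, you can query nodes by the label (a simple verification sketch):

```shell
# Cloud compute nodes added from a node pool in the registered cluster
kubectl get nodes -l alibabacloud.com/external=true

# On-premises compute nodes, which do not carry the label
kubectl get nodes -l 'alibabacloud.com/external!=true'
```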
Scenario 3: The data center uses the host network for container networking
In this scenario, the data center uses the host network for container networking. You only need to make sure that the DaemonSet of the Terway network plug-in is not scheduled to on-premises compute nodes. By default, the DaemonSet of the Terway network plug-in is scheduled only to nodes that have the alibabacloud.com/external=true label.
Install and configure the Terway network plug-in
In Scenario 2 and Scenario 3, you must install and configure the Terway network plug-in on the cloud compute nodes of the hybrid cluster.
Step 1: Grant permissions to the Terway network plug-in
Use the RAM console
Create a RAM user and attach the following policy to the RAM user. For more information, see Use RAM to authorize access to clusters and cloud resources.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Configurations > Secrets.
On the Secrets page, click Create from YAML. Fill in the following sample code to create a Secret named alibaba-addon-secret.
Note: Components use the AccessKey ID and AccessKey secret stored in this Secret to access cloud services. Skip this step if a Secret named alibaba-addon-secret already exists in the cluster.
apiVersion: v1
kind: Secret
metadata:
  name: alibaba-addon-secret
  namespace: kube-system
type: Opaque
stringData:
  access-key-id: <AccessKeyID of the RAM user>
  access-key-secret: <AccessKeySecret of the RAM user>
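Alternatively (a sketch, assuming you have kubectl access to the registered cluster), you can create the same Secret from the command line instead of the console:

```shell
# Create the alibaba-addon-secret Secret directly with kubectl.
# Replace the placeholders with the credentials of your RAM user.
kubectl -n kube-system create secret generic alibaba-addon-secret \
  --from-literal=access-key-id='<AccessKeyID of the RAM user>' \
  --from-literal=access-key-secret='<AccessKeySecret of the RAM user>'
```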
Use onectl
Install onectl on your on-premises machine. For more information, see Use onectl to manage registered clusters.
Run the following command to grant Resource Access Management (RAM) permissions to Terway:
onectl ram-user grant --addon terway-eniip
Expected output:
Ram policy ack-one-registered-cluster-policy-terway-eniip granted to ram user ack-one-user-ce313528c3 successfully.
Step 2: Install the Terway plug-in
Use the ACK console
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Operations > Add-ons.
On the Add-ons page, click the Networking tab. Select the terway-eniip component and then click Install.
Note: NetworkPolicy is disabled by default. To enable this feature, select Enable NetworkPolicy on the installation page and complete the additional configurations. For more information, see Enable the NetworkPolicy feature of Terway (Optional).
Use onectl
Run the following command to install the Terway plug-in:
onectl addon install terway-eniip
Expected output:
Addon terway-eniip, version **** installed.
Step 3: Configure the Terway plug-in
Run the following command to modify the eni-config ConfigMap and configure the eni_conf.access_key and eni_conf.access_secret parameters:
kubectl -n kube-system edit cm eni-config
The following sample code provides an example of the eni-config ConfigMap:
kind: ConfigMap
apiVersion: v1
metadata:
  name: eni-config
  namespace: kube-system
data:
  eni_conf: |
    {
      "version": "1",
      "max_pool_size": 5,
      "min_pool_size": 0,
      "vswitches": {"AZoneID":["VswitchId"]},
      "eni_tags": {"ack.aliyun.com":"{{.ClusterId}}"},
      "service_cidr": "{{.ServiceCIDR}}",
      "security_group": "{{.SecurityGroupId}}",
      "access_key": "",
      "access_secret": "",
      "vswitch_selection_policy": "ordered"
    }
  10-terway.conf: |
    {
      "cniVersion": "0.3.0",
      "name": "terway",
      "type": "terway"
    }
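If you prefer a non-interactive alternative to kubectl edit, you can rewrite the two keys with jq (a sketch, assuming jq is installed; shown here on a saved copy of the ConfigMap rather than the live object):

```shell
# A trimmed sample of the ConfigMap, saved locally. Against a real cluster
# you would fetch it first with:
#   kubectl -n kube-system get cm eni-config -o json > eni-config.json
cat > eni-config.json <<'EOF'
{"apiVersion":"v1","kind":"ConfigMap",
 "metadata":{"name":"eni-config","namespace":"kube-system"},
 "data":{"eni_conf":"{\"access_key\":\"\",\"access_secret\":\"\"}"}}
EOF

# eni_conf is a JSON string stored inside the ConfigMap, so decode it with
# fromjson, set the two keys, and re-encode it with tojson.
jq '.data.eni_conf |= (fromjson
      | .access_key = "<AccessKeyID>"
      | .access_secret = "<AccessKeySecret>"
      | tojson)' eni-config.json > eni-config-new.json

# Apply the result back to the cluster:
#   kubectl apply -f eni-config-new.json
```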
You can use a kubeconfig file to connect to the registered cluster and query the DaemonSet that is created for the Terway network plug-in. Before cloud compute nodes are added to the hybrid cluster, the DaemonSet is not scheduled to on-premises compute nodes.
Run the following command to query the Terway network:
kubectl -n kube-system get ds | grep terway
Expected output:
terway-eniip 0 0 0 0 0 alibabacloud.com/external=true 16s
Enable the NetworkPolicy feature of Terway (Optional)
By default, the NetworkPolicy feature of Terway is disabled in a registered cluster. For more information, see Use network policies in ACK clusters.
If you do not need to enable NetworkPolicy, skip this step.
If you enable NetworkPolicy, determine whether to install the related CustomResourceDefinitions (CRDs) based on your business requirements. The following YAML template provides examples of the CRDs.
Important: The NetworkPolicy feature of Terway depends on Calico-related CRDs. If you enable the NetworkPolicy feature of Terway in a cluster that already uses Calico, errors may occur in the existing Calico networks. If you have any questions, submit a ticket.