Service Mesh (ASM) allows you to use a Container Network Interface (CNI) plug-in to set up traffic redirection for pods without injecting the istio-init container that configures iptables rules with elevated privileges. A CNI plug-in does not require that you have elevated Kubernetes role-based access control (RBAC) permissions. This reduces the requirements for user permissions and improves the security of ASM. This topic describes how to enable a CNI plug-in.
Prerequisites
An ASM instance whose version is 1.14.3.86 or later is created. For more information, see Create an ASM instance.
The kubectl client is used to connect to the ASM instance. For more information, see Use kubectl on the control plane to access Istio resources.
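As a quick check of the second prerequisite, you can list Istio resources through the ASM kubeconfig. The command below is a minimal sketch and assumes that your kubeconfig already points to the ASM instance.

# List Istio resources to confirm that kubectl can reach the ASM control plane.
kubectl get virtualservices,destinationrules -A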
Background information
To enable an ASM instance to work as expected, you must inject an Envoy proxy into each pod that participates in the mesh. You must also configure iptables rules in each pod so that the pod's traffic is redirected to the injected Envoy proxy, which then forwards the traffic to the application. The iptables rules of each pod belong to the network namespace of the pod. Therefore, changes to the iptables rules of a pod do not affect the other pods on the same node.
By default, the istio-init init container is injected into the pods that are deployed in an ASM instance. This container configures the iptables rules before the other containers in the pod are started. This requires that you have sufficient permissions, including the permissions to deploy containers that require the NET_ADMIN capability and to reconfigure the network.
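To see the elevated privileges involved, you can inspect the securityContext of the istio-init container in a pod that was injected in the default way. The following sketch assumes an injection template that matches upstream Istio; the pod name and namespace are placeholders.

# Show the Linux capabilities (such as NET_ADMIN) requested by the istio-init init container.
kubectl get pod <pod-name> -n <namespace> -o jsonpath="{.spec.initContainers[?(@.name=='istio-init')].securityContext.capabilities}"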
ASM allows you to use a CNI plug-in to set up this traffic redirection without injecting the istio-init container. A CNI plug-in does not require that you have elevated Kubernetes RBAC permissions, because it configures pod traffic redirection in the network setup phase of the pod lifecycle. As a result, pods no longer need to include the istio-init container that requires the NET_ADMIN capability. After the CNI plug-in is enabled, its configuration is appended to the existing chained CNI configuration on each node so that the plug-in is called when pod networking is set up.
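If you want to confirm how the plug-in is chained, you can inspect the CNI configuration on a node after the plug-in is enabled. The path below is the conventional CNI configuration directory and is an assumption rather than an ASM-specific guarantee.

# On a node, list the plug-in types in the chained CNI configuration.
# /etc/cni/net.d is the conventional default directory and may differ in your environment.
cat /etc/cni/net.d/*.conflist | jq '.plugins[].type'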
A CNI plug-in identifies pods that require traffic redirection by checking whether the pods meet all of the following conditions (you can verify these conditions for a specific pod with the commands shown after the list):
The namespace of the pod is not contained in the value of the excludeNamespaces parameter.
The pod contains a container named istio-proxy.
The pod contains multiple containers.
The pod has no annotation whose key is sidecar.istio.io/inject, or the pod has an annotation sidecar.istio.io/inject whose value is not false.
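The following commands are a minimal sketch for checking these conditions on a specific pod; the pod name and namespace are placeholders.

# Check whether the pod carries the sidecar.istio.io/inject annotation and, if so, its value.
kubectl get pod <pod-name> -n <namespace> -o yaml | grep "sidecar.istio.io/inject"

# List the containers in the pod to confirm that it has more than one container, including one named istio-proxy.
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{end}'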
Enable a CNI plug-in
You can enable a CNI plug-in for an ASM instance in the ASM console. Perform the following steps:
Log on to the ASM console.
On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, go to the ASM CNI Plug-in page.
On the ASM CNI Plug-in page, turn on Enable Grid CNI Plugin, select the namespaces that you want to exclude, and then click Update Settings.
Pods in the excluded namespaces use the istio-init container rather than the CNI plug-in for network configuration. When the value in the Status column that corresponds to the ASM instance changes from Updating to Running, the CNI plug-in is enabled.
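To confirm the effect, you can check a pod that was created after the plug-in was enabled: it should no longer contain the istio-init init container that requires the NET_ADMIN capability. This check is a sketch; the pod name and namespace are placeholders, and the exact set of init containers can vary by ASM version.

# List the init containers of a newly created, sidecar-injected pod.
# With the CNI plug-in enabled, istio-init should not appear.
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{range .spec.initContainers[*]}{.name}{"\n"}{end}'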
Verify iptables rules
In this example, a bookinfo application is deployed in a cluster to check whether iptables rules take effect.
Create a file named bookinfo.yaml that contains the manifest of the Bookinfo sample application.
Run the following command to deploy the bookinfo application:
kubectl apply -f bookinfo.yaml
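Before you inspect the iptables rules, you can verify that the Bookinfo pods are running and that the sidecar was injected; a pod with the injected istio-proxy container typically shows 2/2 in the READY column.

# Confirm that the Bookinfo pods are running with both the application container and the istio-proxy sidecar.
kubectl get pods -n default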
Run the following commands to obtain the ID of the istio-proxy container in the pod of the productpage service and the name of the node on which the pod runs:
ns=default
podname=$(kubectl get pod -n ${ns} | grep productpage | awk '{print $1}')

# Run the following command if the container runtime is Docker:
container_id=$(kubectl get pod -n ${ns} ${podname} -o jsonpath="{.status.containerStatuses[?(@.name=='istio-proxy')].containerID}" | sed -n 's/docker:\/\/\(.*\)/\1/p')

# Run the following command if the container runtime is containerd:
container_id=$(kubectl get pod -n ${ns} ${podname} -o jsonpath="{.status.containerStatuses[?(@.name=='istio-proxy')].containerID}" | sed -n 's/containerd:\/\/\(.*\)/\1/p')

echo $container_id

# Obtain the name of the node.
kubectl get pod -n ${ns} ${podname} -o jsonpath="{.spec.nodeName}"
Log on to the node on which the pod of the productpage service runs. For example, you can use SSH to log on. Then, run the following command to obtain the process ID (PID) that corresponds to the container ID:
# Run the following command if the container runtime is Docker:
docker inspect --format '{{ .State.Pid }}' $container_id

# Run the following command if the container runtime is containerd:
crictl inspect $container_id | jq ".info.pid"
Run the following command to enter the network namespace of the container and view its current iptables configuration. Replace ${pid} with the process ID that you obtained in the previous step:
nsenter -t ${pid} -n iptables -L -t nat -n -v --line-numbers -x
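If traffic redirection is configured, the NAT table of the pod's network namespace should contain Istio's redirection chains. The following filter is a quick sketch that assumes the standard chain names used by Istio, such as ISTIO_INBOUND, ISTIO_OUTPUT, ISTIO_IN_REDIRECT, and ISTIO_REDIRECT.

# List only the Istio-related rules in the NAT table of the pod's network namespace.
nsenter -t ${pid} -n iptables -t nat -S | grep ISTIO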