This topic describes the issue in which a pod is in the init crash state (Init:Error or Init:CrashLoopBackOff) after a sidecar proxy is injected into the pod. This topic also describes the cause of the issue and provides a solution.
Problem description
After you run the following command to check the status of pods, you find that a pod into which a sidecar proxy is injected is in the init crash state:
kubectl get pod
The system displays information similar to the following output:
NAME                 READY   STATUS                  RESTARTS   AGE
details-v1-u****     0/2     Init:Error              1          12h
productpage-n****    0/2     Init:CrashLoopBackOff   3          12h
Then, you run the following command to check the logs of the istio-init container:
kubectl --kubeconfig=${USER_KUBECONFIG} -c istio-init logs ${pod}
The system displays information similar to the following output:
......
......
-A ISTIO_OUTPUT -d 127.0.**.**/32 -j RETURN
-A ISTIO_OUTPUT -d 192.168.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
COMMIT
2022-03-23T06:42:21.179567Z info Running command: iptables-restore --noflush /tmp/iptables-rules-1648017741179373856.txt4205119933
2022-03-23T06:42:21.185698Z error Command error output: xtables other problem: line 2 failed
2022-03-23T06:42:21.185720Z error Failed to execute: iptables-restore --noflush /tmp/iptables-rules-1648017741179373856.txt4205119933, exit status 1
The Failed to execute: iptables-restore error message is recorded in the logs of the istio-init container.
Cause
Check whether you have removed the exited istio-init container by running a command such as docker container rm, docker container prune, or docker system prune, or whether a scheduled task runs such a command to clean up stopped containers.
If the exited istio-init container is removed, Kubernetes detects that the istio-init container associated with the pod is missing and restarts it. However, the restarted istio-init container cannot apply its iptables rules because the rules were already created by the previous run. The iptables-restore command fails, and the istio-init container crashes.
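To see which containers such a cleanup would remove, you can list the exited istio-init containers on a node. The following command is a sketch that assumes Docker is the container runtime on the node; it uses the same io.kubernetes.container.name label that the filter in the solution below relies on:
# Assumes Docker is the container runtime on the node.
# -a also lists exited containers, which docker container prune would remove.
docker ps -a --filter "label=io.kubernetes.container.name=istio-init"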
Solution
To resolve the issue, recreate the pod. After the pod is recreated, it returns to the normal state.
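For example, assuming that the pod is managed by a controller such as a Deployment, you can delete the pod and let the controller recreate it. This is a sketch; replace ${pod} with the name of the affected pod:
# Assumes the pod is managed by a controller such as a Deployment,
# which automatically recreates the deleted pod.
kubectl delete pod ${pod}
To prevent the issue from recurring, take one of the following measures: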
- If you run a command to clean up data in batches, filter out the istio-init container in the command to prevent the istio-init container from being cleaned up.
docker system prune --filter "label!=io.kubernetes.container.name=istio-init"
- If you run a scheduled task to clean up data, replace the docker system prune command in the script of the scheduled task with the following command. The filter excludes the istio-init container so that it is not cleaned up.
docker system prune --filter "label!=io.kubernetes.container.name=istio-init"
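For example, a crontab entry for such a scheduled cleanup might look like the following. This is a sketch; the schedule and the -f flag, which skips the confirmation prompt in a non-interactive task, are assumptions:
# Example schedule: daily at 03:00. -f skips the confirmation prompt.
0 3 * * * docker system prune -f --filter "label!=io.kubernetes.container.name=istio-init"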