Alibaba Cloud Service Mesh:Pod stuck in init crash state after sidecar proxy injection

Last Updated: Mar 11, 2026

After sidecar proxy injection, a pod may enter the Init:Error or Init:CrashLoopBackOff state. This happens when Docker cleanup operations remove the exited istio-init container, and the restarted container fails to apply iptables rules that already exist.

Symptom

Run kubectl get pod to check pod status. Affected pods display one of the following states:

NAME                 READY   STATUS                    RESTARTS   AGE
details-v1-u****     0/2     Init:Error                1          12h
productpage-n****    0/2     Init:CrashLoopBackOff     3          12h
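Pods in either init-crash state can be picked out mechanically. In this sketch the kubectl get pod output is simulated with a shell variable (the pod names come from the sample above, plus an illustrative Running pod); in practice, pipe live kubectl get pod -n <namespace> output into the same awk filter:

```shell
# Hedged sketch: list pods stuck in an init-crash state.
# The output below is simulated; in practice replace the echo with:
#   kubectl get pod -n <namespace>
kubectl_output='NAME                 READY   STATUS                    RESTARTS   AGE
details-v1-u****     0/2     Init:Error                1          12h
productpage-n****    0/2     Init:CrashLoopBackOff     3          12h
ratings-v1-a****     2/2     Running                   0          12h'

# Print the name of every pod whose STATUS column is an init-crash state.
echo "$kubectl_output" | awk '$3 ~ /^Init:(Error|CrashLoopBackOff)$/ {print $1}'
```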

To get more details, run kubectl describe pod on the affected pod:

kubectl describe pod <pod-name> -n <namespace>

Look for events related to the istio-init container restart or failure.

Then check the istio-init container logs for the specific error:

kubectl --kubeconfig=${USER_KUBECONFIG} -c istio-init logs ${pod}

The logs show an iptables-restore failure:

......
-A ISTIO_OUTPUT -d 127.0.**.**/32 -j RETURN
-A ISTIO_OUTPUT -d 192.168.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
COMMIT
2022-03-23T06:42:21.179567Z    info    Running command: iptables-restore --noflush /tmp/iptables-rules-1648017741179373856.txt4205119933
2022-03-23T06:42:21.185698Z    error   Command error output: xtables other problem: line 2 failed
2022-03-23T06:42:21.185720Z    error   Failed to execute: iptables-restore --noflush /tmp/iptables-rules-1648017741179373856.txt4205119933, exit status 1

The key indicator is Failed to execute: iptables-restore in the output.
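The check for that signature can be scripted. In this sketch the log text is simulated with a shell variable; in practice, pipe the kubectl logs command above into the same grep:

```shell
# Hedged sketch: detect the iptables-restore failure signature in istio-init
# logs. The log lines are simulated here; in practice use:
#   kubectl logs ${pod} -c istio-init | grep 'Failed to execute: iptables-restore'
log='2022-03-23T06:42:21.185698Z    error   Command error output: xtables other problem: line 2 failed
2022-03-23T06:42:21.185720Z    error   Failed to execute: iptables-restore --noflush /tmp/iptables-rules.txt, exit status 1'

if echo "$log" | grep -q 'Failed to execute: iptables-restore'; then
  echo "iptables-restore failure detected"
fi
```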

Cause

The istio-init container sets up iptables rules that redirect pod network traffic through the Istio sidecar proxy. It runs once during pod initialization, creates the required iptables rules, and then exits.

The crash occurs through the following sequence:

  1. The istio-init container runs, applies iptables rules, and exits normally.

  2. A Docker cleanup command (docker container rm, docker container prune, or docker system prune) removes the exited istio-init container.

  3. The kubelet detects the missing container and restarts istio-init.

  4. The restarted container attempts to apply the same iptables rules, but the rules from the first run still exist.

  5. iptables-restore fails with a duplicate rule error, and the pod enters a crash loop.

Common triggers:

  • Running docker system prune manually on a node

  • A scheduled cron job that cleans up Docker resources

  • Running docker container rm or docker container prune without filtering out Kubernetes-managed containers
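These commands are dangerous because, under the Docker runtime, kubelet-managed containers are ordinary Docker containers, and a completed istio-init shows up as Exited, which is exactly what cleanup commands target. A sketch with simulated docker ps -a output (container IDs and names are illustrative):

```shell
# Hedged sketch: simulated `docker ps -a` output on a node. The exited
# k8s_istio-init container is exactly what an unfiltered prune would remove,
# even though the kubelet still tracks it.
docker_ps='CONTAINER ID   IMAGE           STATUS                    NAMES
3f2a****       istio/proxyv2   Exited (0) 2 hours ago    k8s_istio-init_details-v1
9c1b****       istio/proxyv2   Up 2 hours                k8s_istio-proxy_details-v1'

# List what a blanket cleanup would delete: every exited container.
echo "$docker_ps" | awk '/Exited/ {print $NF}'
```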

Solution

Recover the affected pod

Delete the affected pod. Its workload controller (for example, a Deployment) creates a replacement pod whose fresh network namespace contains no leftover rules, so the new istio-init container applies its iptables rules without conflicts.

kubectl delete pod <pod-name> -n <namespace>
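When several pods are affected, the deletion can be scripted. This dry-run sketch simulates the pod list and only echoes the delete command (the namespace default is an example); to apply it for real, feed in live kubectl get pod output and remove the echo:

```shell
# Hedged dry-run sketch: batch-delete pods stuck in an init-crash state.
# The pod list is simulated and `kubectl delete` is only echoed; remove the
# echo (and the simulated list) to act on a real cluster.
pods='NAME                 READY   STATUS        RESTARTS   AGE
details-v1-u****     0/2     Init:Error    1          12h
ratings-v1-a****     2/2     Running       0          12h'

echo "$pods" \
  | awk '$3 ~ /^Init:/ {print $1}' \
  | xargs -r -I{} echo kubectl delete pod {} -n default
```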

Prevent recurrence

Exclude the istio-init container from Docker cleanup commands by adding a label filter.

For manual cleanup, run:

docker system prune --filter "label!=io.kubernetes.container.name=istio-init"

For scheduled tasks, replace any docker system prune command in your cron job scripts with the filtered version:

# Before (causes init crash)
docker system prune

# After (excludes istio-init)
docker system prune --filter "label!=io.kubernetes.container.name=istio-init"

This filter excludes only the istio-init container. If your cron job also runs docker container rm or docker container prune, apply the same filter to those commands.
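In a scheduled task, the command also needs -f to skip the interactive confirmation prompt, which would otherwise hang a non-interactive job. An illustrative crontab entry (the schedule and log path are examples):

```shell
# Illustrative crontab entry: nightly Docker cleanup at 03:00 that never
# removes Kubernetes-managed istio-init containers.
0 3 * * * /usr/bin/docker system prune -f --filter "label!=io.kubernetes.container.name=istio-init" >> /var/log/docker-prune.log 2>&1
```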