Ingress gateways are the traffic entry point for services in an Alibaba Cloud Service Mesh (ASM) instance. If a gateway pod fails and no other pods are available, your services become unreachable. Distributing gateway pods across multiple nodes or availability zones prevents a single failure from taking down the entire ingress path.
The configuration approach depends on your cluster type:
| Cluster type | HA mechanism | Why |
|---|---|---|
| ACK cluster | Pod anti-affinity | Spreads gateway pods across nodes or zones using Kubernetes scheduling rules |
| ACK Serverless cluster | ECI pod annotations | Distributes pods across zones (pod anti-affinity is not supported in serverless clusters) |
Prerequisites
An ASM instance is created. For more information, see Create an ASM instance.
A Container Service for Kubernetes (ACK) cluster or an ACK Serverless cluster is created. For more information, see Create an ACK managed cluster or Create an ACK Serverless cluster.
Spread pods with anti-affinity (ACK clusters)
In an ACK cluster, add a podAntiAffinity rule to the IstioGateway YAML to prevent the scheduler from placing multiple gateway pods on the same node or in the same zone.
Spread pods across nodes
Set topologyKey to kubernetes.io/hostname so each node runs at most one gateway pod.
The following IstioGateway YAML adds the affinity block for node-level anti-affinity. The anti-affinity fields are explained in the table below.
```yaml
apiVersion: istio.alibabacloud.com/v1beta1
kind: IstioGateway
metadata:
  name: ingressgateway-1
  namespace: istio-system
spec:
  clusterIds:
    - "c954ee9df88f64f229591f0ea4c61****"
  cpu:
    targetAverageUtilization: 80
  externalTrafficPolicy: Local
  maxReplicas: 4
  minReplicas: 2
  ports:
    - name: status-port
      port: 15020
      targetPort: 15020
    - name: http2
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 80
    - name: tls
      port: 15443
      targetPort: 15443
  replicaCount: 1
  resources:
    limits:
      cpu: '2'
      memory: 2G
    requests:
      cpu: 200m
      memory: 256Mi
  sds:
    enabled: true
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 2000m
        memory: 1024Mi
  serviceType: LoadBalancer
  # --- Pod anti-affinity: spread pods across nodes ---
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - istio-ingressgateway-1
            topologyKey: kubernetes.io/hostname # One pod per node
          weight: 100
  rollingMaxSurge: "100%"
  rollingMaxUnavailable: "25%"
```

Anti-affinity fields:
| Field | Value | Effect |
|---|---|---|
| preferredDuringSchedulingIgnoredDuringExecution | (soft rule) | The scheduler tries to honor this rule but still schedules the pod if no compliant node is available. |
| matchExpressions | key: app, operator: In, values: [istio-ingressgateway-1] | Matches pods labeled app=istio-ingressgateway-1. The scheduler avoids placing a new pod on a node that already runs a pod with this label. |
| topologyKey | kubernetes.io/hostname | Defines the failure domain as individual nodes. Each node runs at most one gateway pod. |
| weight | 100 | Sets the priority of this rule. Higher values make the rule more influential when the scheduler evaluates candidate nodes. |
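Because the rule above is a soft preference, the scheduler will still co-locate gateway pods when no compliant node is available. If you would rather have a replica stay Pending than share a node with another gateway pod, a hard rule can be used instead. The following is a sketch (not from the ASM documentation) using the standard Kubernetes requiredDuringSchedulingIgnoredDuringExecution form; note that hard rules take no weight field and can block scheduling if the cluster has fewer nodes than replicas:

```yaml
# Sketch: hard anti-affinity. An unschedulable replica stays Pending rather
# than sharing a node, so ensure the cluster has at least maxReplicas nodes.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - istio-ingressgateway-1
        topologyKey: kubernetes.io/hostname
```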
Spread pods across zones
To protect against zone-level failures, change topologyKey to topology.kubernetes.io/zone. The rest of the configuration is identical.
Replace the affinity block with:
```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - istio-ingressgateway-1
          topologyKey: topology.kubernetes.io/zone # One pod per zone
        weight: 100
```

With this setting, the scheduler distributes gateway pods so that, where capacity allows, each availability zone runs at most one pod labeled app=istio-ingressgateway-1. Because this is a soft rule, pods can still share a zone when no other zone can accept them.
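The node-level and zone-level rules can also be combined in a single podAntiAffinity block. The following sketch (an illustration, not from the ASM documentation) weights zone spreading higher than node spreading, so the scheduler first tries to place pods in different zones and then on different nodes within a zone:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      # Strongly prefer spreading gateway pods across zones.
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - istio-ingressgateway-1
          topologyKey: topology.kubernetes.io/zone
        weight: 100
      # Weakly prefer spreading across nodes within a zone.
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - istio-ingressgateway-1
          topologyKey: kubernetes.io/hostname
        weight: 50
```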
Distribute pods across zones with ECI annotations (ACK Serverless clusters)
ACK Serverless clusters run pods as Elastic Container Instance (ECI) pods and do not support the standard Kubernetes pod anti-affinity mechanism. Use ECI-specific pod annotations to distribute gateway pods across availability zones instead.
Step 1: Configure multiple zones
Set up multiple vSwitches in different zones within your ACK Serverless cluster. For instructions, see Create ECIs across zones.
Step 2: Add zone annotations to the ingress gateway
In the IstioGateway YAML, add a podAnnotations section that specifies the vSwitch IDs and the scheduling strategy. The following example distributes ECI pods randomly across the specified zones.
```yaml
apiVersion: istio.alibabacloud.com/v1beta1
kind: IstioGateway
metadata:
  name: ingressgateway
  namespace: istio-system
spec:
  clusterIds:
    - "c954ee9df88f64f229591f0ea4c61****"
  cpu:
    targetAverageUtilization: 80
  externalTrafficPolicy: Local
  maxReplicas: 4
  minReplicas: 2
  ports:
    - name: status-port
      port: 15020
      targetPort: 15020
    - name: http2
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 80
    - name: tls
      port: 15443
      targetPort: 15443
  replicaCount: 1
  resources:
    limits:
      cpu: '2'
      memory: 2G
    requests:
      cpu: 200m
      memory: 256Mi
  sds:
    enabled: true
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 2000m
        memory: 1024Mi
  serviceType: LoadBalancer
  # --- ECI zone distribution annotations ---
  podAnnotations:
    k8s.aliyun.com/eci-vswitch: "vsw-bp1b07j0miob3khtn****,vsw-bp12b85hh323se8ft****" # vSwitch IDs in different zones
    k8s.aliyun.com/eci-schedule-strategy: "VSwitchRandom" # Distribute pods randomly across zones
  rollingMaxSurge: "100%"
  rollingMaxUnavailable: "25%"
```

Annotation reference:
| Annotation | Description |
|---|---|
| k8s.aliyun.com/eci-vswitch | Comma-separated vSwitch IDs. Each vSwitch belongs to a different zone within the same virtual private cloud (VPC). Replace the example IDs with your own vSwitch IDs. |
| k8s.aliyun.com/eci-schedule-strategy | Scheduling strategy for ECI pods. Set to VSwitchRandom to distribute pods randomly across the specified zones. |
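If you prefer deterministic placement over random distribution, the ECI annotation reference also documents an ordered strategy that tries the vSwitches in the listed order and falls back to the next one when a zone is out of capacity. The following sketch assumes the VSwitchOrdered value from the ECI documentation; verify it against your ECI version:

```yaml
podAnnotations:
  k8s.aliyun.com/eci-vswitch: "vsw-bp1b07j0miob3khtn****,vsw-bp12b85hh323se8ft****"
  k8s.aliyun.com/eci-schedule-strategy: "VSwitchOrdered" # Try vSwitches in the listed order
```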