An Application Load Balancer (ALB) Ingress is an API object that provides Layer 7 load balancing to manage external access to Services in a Kubernetes cluster. This topic describes how to use ALB Ingresses to forward requests to backend server groups based on domain names and URL paths, redirect HTTP requests to HTTPS, and implement canary releases.
Prerequisites
An ACK managed cluster or ACK dedicated cluster is created and the cluster runs Kubernetes 1.18 or later. For more information, see Create an ACK managed cluster and Create an ACK dedicated cluster.
Two vSwitches that reside in different zones are created and deployed in the same virtual private cloud (VPC) as the ACK cluster. For more information, see Create and manage a vSwitch.
The ALB Ingress controller is installed in the cluster. For more information, see Manage the ALB Ingress controller.
Note: To use an ALB Ingress to access Services deployed in an ACK dedicated cluster, you must first grant the cluster the permissions required by the ALB Ingress controller. For more information, see Authorize an ACK dedicated cluster to access the ALB Ingress controller.
An AlbConfig object is created. For more information, see Create an AlbConfig object.
Forward requests based on domain names
Perform the following steps to create an Ingress with a domain name and an Ingress without a domain name, and then use the Ingresses to forward requests.
Create an Ingress with a domain name
Use the following template to create a Deployment, a Service, and an Ingress. Requests to the domain name of the Ingress are forwarded to the Service.
Clusters that run Kubernetes 1.19 or later
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: default
spec:
  ports:
    - name: port1
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo
  sessionAffinity: None
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
          imagePullPolicy: IfNotPresent
          name: demo
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - host: demo.domain.ingress.top
      http:
        paths:
          - backend:
              service:
                name: demo-service
                port:
                  number: 80
            path: /hello
            pathType: ImplementationSpecific
Clusters that run Kubernetes versions earlier than 1.19
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: default
spec:
  ports:
    - name: port1
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo
  sessionAffinity: None
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
          imagePullPolicy: IfNotPresent
          name: demo
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - host: demo.domain.ingress.top
      http:
        paths:
          - backend:
              serviceName: demo-service
              servicePort: 80
            path: /hello
            pathType: ImplementationSpecific
Run the following command to access the application by using the specified domain name. Replace <ADDRESS> with the address of the ALB instance, which you can query by running the kubectl get ing command.

curl -H "host: demo.domain.ingress.top" <ADDRESS>/hello
Expected output:
{"hello":"coffee"}
Create an Ingress without a domain name
The following template shows the configuration of the Ingress:
Clusters that run Kubernetes 1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - host: ""
      http:
        paths:
          - backend:
              service:
                name: demo-service
                port:
                  number: 80
            path: /hello
            pathType: ImplementationSpecific
Clusters that run Kubernetes versions earlier than 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - host: ""
      http:
        paths:
          - backend:
              serviceName: demo-service
              servicePort: 80
            path: /hello
            pathType: ImplementationSpecific
Run the following command to access the application without using a domain name. Replace <ADDRESS> with the address of the ALB instance, which you can query by running the kubectl get ing command.

curl <ADDRESS>/hello
Expected output:
{"hello":"coffee"}
Forward requests based on URL paths
ALB Ingresses can forward requests based on URL paths. You can use the pathType parameter to configure URL match policies. Valid values of pathType are Exact, ImplementationSpecific, and Prefix.
URL match policies may conflict with each other. When conflicting URL match policies exist, requests are matched against the policies in descending order of priority. For more information, see Configure forwarding rule priorities.
Match mode | Rule | URL path | Whether the URL path matches the rule |
Prefix | / | (All paths) | Yes |
Prefix | /foo | /foo, /foo/ | Yes |
Prefix | /foo/ | /foo, /foo/ | Yes |
Prefix | /aaa/bb | /aaa/bbb | No |
Prefix | /aaa/bbb | /aaa/bbb | Yes |
Prefix | /aaa/bbb/ | /aaa/bbb | Yes. The trailing forward slash (/) of the rule is ignored. |
Prefix | /aaa/bbb | /aaa/bbb/ | Yes. The rule matches the trailing forward slash (/) of the URL path. |
Prefix | /aaa/bbb | /aaa/bbb/ccc | Yes. The rule matches the subpath of the URL path. |
Prefix | Configure two rules at the same time: / and /aaa | /aaa/ccc | Yes. The URL path matches the /aaa prefix. |
Prefix | Configure multiple rules at the same time: /, /aaa, and /aaa/bbb | /aaa/ccc | Yes. The URL path matches the /aaa prefix. |
Prefix | Configure multiple rules at the same time: /, /aaa, and /aaa/bbb | /ccc | Yes. The URL path matches the / prefix. |
Prefix | /aaa | /ccc | No |
Exact or ImplementationSpecific | /foo | /foo | Yes |
Exact or ImplementationSpecific | /foo | /bar | No |
Exact or ImplementationSpecific | /foo | /foo/ | No |
Exact or ImplementationSpecific | /foo/ | /foo | No |
You can perform the following steps to configure different URL match policies.
Exact
The following template shows the configuration of the Ingress:
Clusters that run Kubernetes 1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-path
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              service:
                name: demo-service
                port:
                  number: 80
            pathType: Exact
Clusters that run Kubernetes versions earlier than 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-path
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: demo-service
              servicePort: 80
            pathType: Exact
Run the following command to access the application. Replace <ADDRESS> with the address of the ALB instance, which you can query by running the kubectl get ing command.

curl <ADDRESS>/hello
Expected output:
{"hello":"coffee"}
ImplementationSpecific: the default match policy

The ALB Ingress configuration is the same as that for the Exact match policy.
The following template shows the configuration of the Ingress:
Clusters that run Kubernetes 1.19 or later

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-path
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              service:
                name: demo-service
                port:
                  number: 80
            pathType: ImplementationSpecific

Clusters that run Kubernetes versions earlier than 1.19

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-path
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: demo-service
              servicePort: 80
            pathType: ImplementationSpecific

Run the following command to access the application. Replace <ADDRESS> with the address of the ALB instance, which you can query by running the kubectl get ing command.

curl <ADDRESS>/hello

Expected output:

{"hello":"coffee"}
Prefix
Match a specified prefix against URL paths. The elements of a URL path are separated by forward slashes (/). The prefix is case-sensitive and matched element by element against the path.

The following template shows the configuration of the Ingress:

Clusters that run Kubernetes 1.19 or later

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-path-prefix
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            backend:
              service:
                name: demo-service
                port:
                  number: 80
            pathType: Prefix

Clusters that run Kubernetes versions earlier than 1.19

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-path-prefix
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: demo-service
              servicePort: 80
            pathType: Prefix

Run the following command to access the application. Replace <ADDRESS> with the address of the ALB instance, which you can query by running the kubectl get ing command.

curl <ADDRESS>/hello

Expected output:

{"hello":"coffee"}
Configure health checks
You can configure health checks for ALB Ingresses by using the following annotations.
The following YAML template provides an example on how to create an Ingress for which health checks are enabled.
Clusters that run Kubernetes 1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/healthcheck-enabled: "true"
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/healthcheck-protocol: "HTTP"
    alb.ingress.kubernetes.io/healthcheck-httpversion: "HTTP1.1"
    alb.ingress.kubernetes.io/healthcheck-method: "HEAD"
    alb.ingress.kubernetes.io/healthcheck-code: "http_2xx"
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "2"
    alb.ingress.kubernetes.io/healthy-threshold-count: "3"
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "3"
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          # Configure a context path.
          - path: /tea
            pathType: ImplementationSpecific
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
          # Configure a context path.
          - path: /coffee
            pathType: ImplementationSpecific
            backend:
              service:
                name: coffee-svc
                port:
                  number: 80
Clusters that run Kubernetes versions earlier than 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/healthcheck-enabled: "true"
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/healthcheck-protocol: "HTTP"
    alb.ingress.kubernetes.io/healthcheck-method: "HEAD"
    alb.ingress.kubernetes.io/healthcheck-httpcode: "http_2xx"
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "2"
    alb.ingress.kubernetes.io/healthy-threshold-count: "3"
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "3"
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          # Configure a context path.
          - path: /tea
            backend:
              serviceName: tea-svc
              servicePort: 80
          # Configure a context path.
          - path: /coffee
            backend:
              serviceName: coffee-svc
              servicePort: 80
Parameter | Description |
alb.ingress.kubernetes.io/healthcheck-enabled | Specifies whether to enable health checks for backend server groups. Valid values: "true" and "false". Default value: "true". |
alb.ingress.kubernetes.io/healthcheck-path | The URL path based on which health checks are performed. Default value: "/". |
alb.ingress.kubernetes.io/healthcheck-protocol | The protocol that is used for health checks. Default value: "HTTP". |
alb.ingress.kubernetes.io/healthcheck-httpversion | The HTTP version that is used for health checks. This parameter takes effect only when the health check protocol is HTTP. Valid values: "HTTP1.0" and "HTTP1.1". Default value: "HTTP1.1". |
alb.ingress.kubernetes.io/healthcheck-method | The request method that is used for health checks. Default value: "HEAD". |
alb.ingress.kubernetes.io/healthcheck-httpcode | The status codes that indicate a healthy backend server. A backend server is considered healthy only when the health check request is successful and one of the specified status codes is returned. You can select one or more of the following status codes and separate them with commas (,): http_2xx, http_3xx, http_4xx, and http_5xx. Default value: "http_2xx". |
alb.ingress.kubernetes.io/healthcheck-code | The status codes that indicate a healthy backend server. A backend server is considered healthy only when the health check request is successful and one of the specified status codes is returned. If you specify both this parameter and alb.ingress.kubernetes.io/healthcheck-httpcode, this parameter takes precedence. The valid values of this parameter depend on the value of alb.ingress.kubernetes.io/healthcheck-protocol. |
alb.ingress.kubernetes.io/healthcheck-timeout-seconds | The timeout period of a health check. Unit: seconds. Valid values: 1 to 300. Default value: "5". |
alb.ingress.kubernetes.io/healthcheck-interval-seconds | The interval between two consecutive health checks. Unit: seconds. Valid values: 1 to 50. Default value: "2". |
alb.ingress.kubernetes.io/healthy-threshold-count | The number of times that a backend server must consecutively pass health checks before it is considered healthy. Valid values: 2 to 10. Default value: "3". |
alb.ingress.kubernetes.io/unhealthy-threshold-count | The number of times that a backend server must consecutively fail health checks before it is considered unhealthy. Valid values: 2 to 10. Default value: "3". |
alb.ingress.kubernetes.io/healthcheck-connect-port | The port that is used for health checks. Default value: "0". Note: A value of 0 indicates that the port of the backend server is used for health checks. |
Configure a redirection from HTTP requests to HTTPS requests
You can configure an ALB Ingress to redirect HTTP requests to HTTPS port 443 by adding the alb.ingress.kubernetes.io/ssl-redirect: "true" annotation.

You cannot create listeners by using an ALB Ingress. To ensure that an ALB Ingress works as expected, you must specify the ports and protocols of the listeners in an AlbConfig, and then associate the listeners with Services in the ALB Ingress.
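For reference, the following AlbConfig sketch declares both an HTTP 80 listener and an HTTPS 443 listener so that the redirect can take effect. It is a minimal example based on the AlbConfig format described in Create an AlbConfig object; the AlbConfig name, vSwitch IDs, and certificate settings are placeholders that you must replace with your own values.

apiVersion: alibabacloud.com/v1
kind: AlbConfig
metadata:
  name: alb-demo                     # Placeholder. Use the AlbConfig that your cluster already references.
spec:
  config:
    name: alb-demo
    addressType: Internet
    zoneMappings:
      - vSwitchId: vsw-aaaaaaaaaaaa  # Placeholder vSwitch IDs in two different zones.
      - vSwitchId: vsw-bbbbbbbbbbbb
  listeners:
    - port: 80
      protocol: HTTP                 # Listener that receives the HTTP requests to be redirected.
    - port: 443
      protocol: HTTPS                # Listener that serves the redirected HTTPS requests.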
The following example shows the redirect configuration:
Clusters that run Kubernetes 1.19 or later
apiVersion: v1
kind: Service
metadata:
  name: demo-service-ssl
  namespace: default
spec:
  ports:
    - name: port1
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo-ssl
  sessionAffinity: None
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-ssl
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-ssl
  template:
    metadata:
      labels:
        app: demo-ssl
    spec:
      containers:
        - image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
          imagePullPolicy: IfNotPresent
          name: demo-ssl
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/ssl-redirect: "true"
  name: demo-ssl
  namespace: default
spec:
  ingressClassName: alb
  tls:
    - hosts:
        - ssl.alb.ingress.top
  rules:
    - host: ssl.alb.ingress.top
      http:
        paths:
          - backend:
              service:
                name: demo-service-ssl
                port:
                  number: 80
            path: /
            pathType: Prefix
Clusters that run Kubernetes versions earlier than 1.19
apiVersion: v1
kind: Service
metadata:
  name: demo-service-ssl
  namespace: default
spec:
  ports:
    - name: port1
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo-ssl
  sessionAffinity: None
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-ssl
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-ssl
  template:
    metadata:
      labels:
        app: demo-ssl
    spec:
      containers:
        - image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
          imagePullPolicy: IfNotPresent
          name: demo-ssl
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/ssl-redirect: "true"
  name: demo-ssl
  namespace: default
spec:
  ingressClassName: alb
  tls:
    - hosts:
        - ssl.alb.ingress.top
  rules:
    - host: ssl.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: demo-service-ssl
              servicePort: 80
            path: /
            pathType: Prefix
Configure HTTPS or gRPC as the backend protocol
ALB Ingresses support HTTPS and gRPC as backend protocols. To use HTTPS or gRPC, add the alb.ingress.kubernetes.io/backend-protocol: "https" or alb.ingress.kubernetes.io/backend-protocol: "grpc" annotation. If you want to use an Ingress to distribute requests to a gRPC service, you must configure an SSL certificate for the gRPC service and use the TLS protocol to communicate with the gRPC service.

You cannot modify the backend protocol after the Ingress is created. If you need to change the protocol, delete and recreate the Ingress.

Example:
Clusters that run Kubernetes 1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: "grpc"
  name: lxd-grpc-ingress
spec:
  ingressClassName: alb
  tls:
    - hosts:
        - demo.alb.ingress.top
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grpc-demo-svc
                port:
                  number: 9080
Clusters that run Kubernetes versions earlier than 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: "grpc"
  name: lxd-grpc-ingress
spec:
  ingressClassName: alb
  tls:
    - hosts:
        - demo.alb.ingress.top
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: grpc-demo-svc
              servicePort: 9080
            path: /
            pathType: Prefix
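To verify the gRPC configuration, you can call the service through the ALB instance with a gRPC client such as grpcurl. The following command is only a sketch: it assumes that demo.alb.ingress.top resolves to the ALB instance, that the gRPC server has reflection enabled, and that certificate verification can be skipped during testing. Adjust the flags to match your certificate setup.

grpcurl -insecure demo.alb.ingress.top:443 list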
Configure regular expressions
You can specify regular expressions as routing conditions in the path field. Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    # Enable regular expressions in the path field.
    alb.ingress.kubernetes.io/use-regex: "true"
    # service-a must be an existing Service in the cluster, and its name must match the Service name in the backend of the rule field.
    # Values that start with the regular expression flag ~* are matched as regular expressions. Values without ~* are matched exactly.
    alb.ingress.kubernetes.io/conditions.service-a: |
      [{
        "type": "Path",
        "pathConfig": {
          "values": [
            "~*^/pathvalue1",
            "/pathvalue2"
          ]
        }
      }]
  name: ingress-example
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /test
            pathType: Prefix
            backend:
              service:
                name: service-a
                port:
                  number: 88
Configure rewrite rules
ALB Ingresses support rewrite rules. To configure rewrite rules, add the alb.ingress.kubernetes.io/rewrite-target: /path/${2} annotation. The following rules apply:

If you use variables of the ${number} type in the rewrite-target annotation, you must set the pathType field to Prefix.

By default, the path parameter does not support characters that are used in regular expressions, such as asterisks (*) and question marks (?). To use such characters in the path parameter, you must add the rewrite-target annotation.

The value of the path parameter must start with a forward slash (/).
If you want to specify regular expressions in rewrite rules, take note of the following items:

You can specify one or more regular expressions in the path field of an ALB Ingress and enclose them in parentheses (). However, you can use at most three variables (${1}, ${2}, and ${3}) in the rewrite-target annotation to form the path that overwrites the original path.

Variables that match the regular expressions are concatenated to form the path that overwrites the original path.

The original path is overwritten only if both of the following requirements are met: regular expressions enclosed in parentheses () are specified in the path field, and the rewrite-target annotation is set to one or more of the variables ${1}, ${2}, and ${3}.
Assume that the path parameter of an ALB Ingress is set to /sys/(.*)/(.*)/aaa and the rewrite-target annotation is set to /${1}/${2}. If a client sends a request to /sys/ccc/bbb/aaa, the request matches the regular expression /sys/(.*)/(.*)/aaa. The rewrite-target annotation takes effect and replaces ${1} with ccc and ${2} with bbb. As a result, the request is forwarded to the backend with the rewritten path /ccc/bbb.
Clusters that run Kubernetes 1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/use-regex: "true" # Enable regular expressions in the path field.
    alb.ingress.kubernetes.io/rewrite-target: /path/${2} # Variables that match the regular expressions are concatenated to form the path that overwrites the original path.
  name: rewrite-ingress
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - path: /something(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: rewrite-svc
                port:
                  number: 9080
Clusters that run Kubernetes versions earlier than 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/use-regex: "true" # Enable regular expressions in the path field.
    alb.ingress.kubernetes.io/rewrite-target: /path/${2} # Variables that match the regular expressions are concatenated to form the path that overwrites the original path.
  name: rewrite-ingress
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: rewrite-svc
              servicePort: 9080
            path: /something(/|$)(.*)
            pathType: Prefix
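To check the rewrite behavior of the example above, send a request whose path matches /something(/|$)(.*). In the sketch below, ${1} captures the forward slash and ${2} captures hello, so the backend receives the rewritten path /path/hello. Replace <ADDRESS> with the address of the ALB instance; the response depends on what rewrite-svc serves at /path/hello.

curl -H "host: demo.alb.ingress.top" <ADDRESS>/something/hello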
Configure custom listening ports
ALB Ingresses allow you to configure custom listening ports so that a Service can be exposed on ports 80 and 443 at the same time.
You cannot create listeners by using an ALB Ingress. To ensure that an ALB Ingress can work as expected, you need to specify the ports and protocols of listeners in an AlbConfig, and then associate the listeners with Services in the ALB Ingress.
Clusters that run Kubernetes 1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
spec:
  ingressClassName: alb
  tls:
    - hosts:
        - demo.alb.ingress.top
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - path: /tea
            pathType: ImplementationSpecific
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
Clusters that run Kubernetes versions earlier than 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
  name: cafe-ingress
spec:
  ingressClassName: alb
  tls:
    - hosts:
        - demo.alb.ingress.top
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: tea-svc
              servicePort: 80
            path: /tea-svc
            pathType: ImplementationSpecific
Configure forwarding rule priorities
By default, forwarding rules are prioritized based on the following rules:

Forwarding rules of different ALB Ingresses are prioritized in the lexicographical order of their namespace/name values. The forwarding rule whose namespace/name value ranks first in lexicographical order has the highest priority.

The forwarding rules within an ALB Ingress are listed in descending order of priority in the rules field.

If you do not want to use the namespace/name value of an ALB Ingress to prioritize forwarding rules, you can use the following annotation instead.

The priority of each forwarding rule within a listener must be unique. You can use the alb.ingress.kubernetes.io/order annotation to specify the priorities of the forwarding rules of an ALB Ingress. Valid values: 1 to 1000. A smaller value indicates a higher priority. Default value: 10.
Clusters that run Kubernetes 1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/order: "2"
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - path: /tea
            pathType: ImplementationSpecific
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
Clusters that run Kubernetes versions earlier than 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "2"
  name: cafe-ingress
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: tea-svc
              servicePort: 80
            path: /tea-svc
            pathType: ImplementationSpecific
Use annotations to perform phased releases
ALB allows you to configure canary releases based on request headers, cookies, and weights to handle complex traffic routing. You can add the alb.ingress.kubernetes.io/canary: "true" annotation to enable the canary release feature. Then, you can use the following annotations to configure different canary release rules.
Canary releases that use different rules take effect in the following order: header-based > cookie-based > weight-based.
When you perform canary releases to test a new application version, do not modify the original Ingress rules. Otherwise, access to the application may be interrupted. After the new application version passes the test, replace the backend Service used by the earlier application version with the backend Service used by the new application version. Then, delete the Ingress rules for implementing canary releases.
alb.ingress.kubernetes.io/canary-by-header and alb.ingress.kubernetes.io/canary-by-header-value: This rule matches the headers and header values of requests. You must add both annotations if you want to use this rule.

If the header and header value of a request match the rule, the request is routed to the new application version. If the header of a request does not match the header-based rule, the request is matched against other types of rules based on the priorities of the rules.

If you set the alb.ingress.kubernetes.io/canary-by-header annotation to location: hz, requests that match the rule are routed to the new application version. Requests that fail to match the rule are routed based on weight-based rules. Example:

Clusters that run Kubernetes 1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "1"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-by-header: "location"
    alb.ingress.kubernetes.io/canary-by-header-value: "hz"
  name: demo-canary
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - backend:
              service:
                name: demo-service-hello
                port:
                  number: 80
            path: /hello
            pathType: ImplementationSpecific
Clusters that run Kubernetes versions earlier than 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "1"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-by-header: "location"
    alb.ingress.kubernetes.io/canary-by-header-value: "hz"
  name: demo-canary
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - backend:
              serviceName: demo-service-hello
              servicePort: 80
            path: /hello
            pathType: ImplementationSpecific
alb.ingress.kubernetes.io/canary-by-cookie: This rule matches the cookies of requests.

If you set the cookie to always, requests that match the rule are routed to the new application version. If you set the cookie to never, requests that match the rule are routed to the earlier application version.

Note: Cookie-based canary release rules do not support other settings. The cookie value must be always or never.

In the following example, requests that contain the demo=always cookie are routed to the new application version:

Clusters that run Kubernetes 1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "2"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-by-cookie: "demo"
  name: demo-canary-cookie
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - backend:
              service:
                name: demo-service-hello
                port:
                  number: 80
            path: /hello
            pathType: ImplementationSpecific
Clusters that run Kubernetes versions earlier than 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "2"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-by-cookie: "demo"
  name: demo-canary-cookie
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - backend:
              serviceName: demo-service-hello
              servicePort: 80
            path: /hello
            pathType: ImplementationSpecific
alb.ingress.kubernetes.io/canary-weight: This rule allows you to set the percentage of requests that are routed to the specified Service. Valid values: integers from 0 to 100.

In the following example, 50% of requests are routed to the new application version. A simple verification sketch follows the examples.
Clusters that run Kubernetes 1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "3"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-weight: "50"
  name: demo-canary-weight
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - backend:
              service:
                name: demo-service-hello
                port:
                  number: 80
            path: /hello
            pathType: ImplementationSpecific
Clusters that run Kubernetes versions earlier than 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "3"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-weight: "50"
  name: demo-canary-weight
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - backend:
              serviceName: demo-service-hello
              servicePort: 80
            path: /hello
            pathType: ImplementationSpecific
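To roughly verify the 50% weight split from the example above, send a batch of requests and compare the responses returned by the two application versions. The loop below is only a sketch: replace <ADDRESS> with the address of the ALB instance, and keep in mind that the split is statistical, so the counts only approximate 50/50.

# Send 20 requests and print each response on its own line.
for i in $(seq 1 20); do curl -s <ADDRESS>/hello; echo; done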
Configure session persistence by using annotations
ALB Ingresses allow you to configure session persistence by using the following annotations:

alb.ingress.kubernetes.io/sticky-session: specifies whether to enable session persistence. Valid values: true and false. Default value: false.

alb.ingress.kubernetes.io/sticky-session-type: the method that is used to handle cookies. Valid values: Insert and Server. Default value: Insert.

Insert: inserts a cookie. ALB inserts a cookie (SERVERID) into the first HTTP or HTTPS response that is sent to a client. Subsequent requests from the client carry this cookie, and the listener distributes them to the recorded backend server.

Server: rewrites a cookie. When ALB detects a user-defined cookie, it overwrites the original cookie with the user-defined cookie. Subsequent requests from the client carry the user-defined cookie, and the listener distributes them to the recorded backend server.

Note: This parameter takes effect only when the StickySessionEnabled parameter is set to true for the server group.

alb.ingress.kubernetes.io/cookie-timeout: the timeout period of cookies. Unit: seconds. Valid values: 1 to 86400. Default value: 1000.

After you apply the following configuration, you can verify session persistence by using the curl sketch that follows the examples.
Clusters that run Kubernetes 1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress-v3
  annotations:
    alb.ingress.kubernetes.io/sticky-session: "true"
    alb.ingress.kubernetes.io/sticky-session-type: "Insert"
    alb.ingress.kubernetes.io/cookie-timeout: "1800"
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          # Configure a context path.
          - path: /tea2
            pathType: ImplementationSpecific
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
          # Configure a context path.
          - path: /coffee2
            pathType: ImplementationSpecific
            backend:
              service:
                name: coffee-svc
                port:
                  number: 80
Clusters that run Kubernetes versions earlier than 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress-v3
  annotations:
    alb.ingress.kubernetes.io/sticky-session: "true"
    alb.ingress.kubernetes.io/sticky-session-type: "Insert"
    alb.ingress.kubernetes.io/cookie-timeout: "1800"
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          # Configure a context path.
          - path: /tea2
            backend:
              serviceName: tea-svc
              servicePort: 80
          # Configure a context path.
          - path: /coffee2
            backend:
              serviceName: coffee-svc
              servicePort: 80
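To verify session persistence with the Insert cookie method, save the cookie that ALB returns on the first request and send it back on subsequent requests. The commands below are a sketch; replace <ADDRESS> with the address of the ALB instance. With the cookie attached, repeated requests should be forwarded to the same backend pod.

# First request: store the cookie that ALB inserts (for example, SERVERID) in a local cookie jar.
curl -c cookie.txt <ADDRESS>/tea2
# Subsequent requests: send the stored cookie so that ALB routes them to the same backend server.
curl -b cookie.txt <ADDRESS>/tea2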
Specify a load balancing algorithm for backend server groups
ALB Ingresses allow you to specify a load balancing algorithm for backend server groups by using the alb.ingress.kubernetes.io/backend-scheduler annotation. Example:
Clusters that run Kubernetes 1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/backend-scheduler: "uch" # Replace uch with wrr, sch, or wlc based on your business requirements.
    alb.ingress.kubernetes.io/backend-scheduler-uch-value: "test" # Required only when the load balancing algorithm is uch. Do not configure this parameter when the algorithm is wrr, sch, or wlc.
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - path: /tea
            pathType: ImplementationSpecific
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
Clusters that run Kubernetes versions earlier than 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-scheduler: "uch" # Replace uch with wrr, sch, or wlc based on your business requirements.
    alb.ingress.kubernetes.io/backend-scheduler-uch-value: "test" # Required only when the load balancing algorithm is uch. Do not configure this parameter when the algorithm is wrr, sch, or wlc.
  name: cafe-ingress
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: tea-svc
              servicePort: 80
            path: /tea-svc
            pathType: ImplementationSpecific
Set the alb.ingress.kubernetes.io/backend-scheduler annotation based on the following description:

wrr: Backend servers with higher weights receive more requests than backend servers with lower weights. This is the default value.

wlc: Requests are distributed based on the weight and load of each backend server. The load refers to the number of connections to a backend server. If multiple backend servers have the same weight, requests are forwarded to the backend server with the fewest connections.

sch: consistent hashing based on source IP addresses.

uch: consistent hashing based on URL parameters. When the load balancing algorithm is uch, you can add the alb.ingress.kubernetes.io/backend-scheduler-uch-value annotation to the ALB Ingress to specify the URL parameter used for consistent hashing.
Configure CORS
The following code block shows an example of the Cross-Origin Resource Sharing (CORS) configuration supported by the ALB Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress
  annotations:
    alb.ingress.kubernetes.io/enable-cors: "true"
    alb.ingress.kubernetes.io/cors-expose-headers: ""
    alb.ingress.kubernetes.io/cors-allow-methods: "GET,POST"
    alb.ingress.kubernetes.io/cors-allow-credentials: "true"
    alb.ingress.kubernetes.io/cors-max-age: "600"
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cloud-nodeport
                port:
                  number: 80
Parameter | Description |
alb.ingress.kubernetes.io/cors-allow-origin | The origins from which a browser is allowed to access resources on the origin server. Separate multiple values with commas (,). Each value must start with http:// or https:// and contain a valid domain name or a top-level wildcard domain name. Default value: *. |
alb.ingress.kubernetes.io/cors-allow-methods | The HTTP methods that are allowed. The values are not case-sensitive. Separate multiple methods with commas (,). Default value: GET, PUT, POST, DELETE, PATCH, OPTIONS. |
alb.ingress.kubernetes.io/cors-allow-headers | The request headers that are allowed. The headers can contain letters, digits, underscores (_), and hyphens (-). Separate multiple headers with commas (,). Default value: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization. |
alb.ingress.kubernetes.io/cors-expose-headers | The response headers that can be exposed. The headers can contain letters, digits, underscores (_), hyphens (-), and asterisks (*). Separate multiple headers with commas (,). This parameter is empty by default. |
alb.ingress.kubernetes.io/cors-allow-credentials | Specifies whether credentials are allowed in CORS requests. Default value: true. |
alb.ingress.kubernetes.io/cors-max-age | The maximum period of time for which the result of a preflight request that uses the OPTIONS method can be cached. Configure this parameter for complex requests. Valid values: -1 to 172800. Unit: seconds. Default value: 172800. |
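You can check the CORS configuration by sending a preflight request and inspecting the Access-Control-* response headers. The command below is a sketch; replace <ADDRESS> with the address of the ALB instance, and note that https://example.com is only a sample origin.

curl -i -X OPTIONS \
  -H "host: demo.alb.ingress.top" \
  -H "Origin: https://example.com" \
  -H "Access-Control-Request-Method: POST" \
  <ADDRESS>/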
Configure persistent connections
Traditional load balancers access backend servers over short-lived connections: a connection is created and closed each time the load balancer forwards a request to a backend server. As a result, establishing network connections can become the performance bottleneck. To reduce the resources used to establish connections and improve forwarding performance, you can enable persistent TCP connections by adding the alb.ingress.kubernetes.io/backend-keepalive annotation to the ALB Ingress. Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress
  annotations:
    alb.ingress.kubernetes.io/backend-keepalive: "true"
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cloud-nodeport
                port:
                  number: 80
Configure QPS throttling
ALB supports QPS throttling based on forwarding rules. You can limit the QPS to a value from 1 to 100000. To enable QPS throttling, add the alb.ingress.kubernetes.io/traffic-limit-qps annotation to the ALB Ingress. Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/traffic-limit-qps: "50"
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - path: /tea
            pathType: ImplementationSpecific
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
          - path: /coffee
            pathType: ImplementationSpecific
            backend:
              service:
                name: coffee-svc
                port:
                  number: 80
Backend slow start
After a new pod is added to the Service backend, if the ALB Ingress immediately distributes traffic to the new pod, it may cause a sudden spike in CPU or memory usage, leading to access issues. In slow start mode, ALB Ingress gradually shifts traffic to the new pod to ease the impact of sudden traffic surges. The following sample code shows an example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/slow-start-enabled: "true"
    alb.ingress.kubernetes.io/slow-start-duration: "100"
  name: alb-ingress
spec:
  ingressClassName: alb
  rules:
    - host: alb.ingress.alibaba.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
Parameter | Description |
alb.ingress.kubernetes.io/slow-start-enabled | Specifies whether to enable slow start. Valid values: "true" and "false". By default, slow start is disabled. |
alb.ingress.kubernetes.io/slow-start-duration | The period over which slow start gradually increases traffic to a new backend server. A longer duration results in a slower traffic increase. Unit: seconds. Valid values: 30 to 900. Default value: "30". |
Connection draining
If a pod enters the Terminating state, the ALB Ingress removes it from the backend server group. However, established connections may still carry in-flight requests, and errors can occur if the ALB Ingress immediately closes all connections. With connection draining enabled, the ALB Ingress keeps connections open for a specified period after the pod is removed so that in-flight requests can complete before the connections are closed. The connection draining modes are:

Disabled: When a pod enters the Terminating state, the ALB Ingress removes the pod from the backend and immediately closes all connections.

Enabled: When a pod enters the Terminating state, the ALB Ingress continues to serve in-flight requests but does not accept new ones. If in-flight requests still exist when the timeout period ends, the ALB Ingress closes all connections and removes the pod. If the pod completes all requests before the timeout period ends, the ALB Ingress removes it immediately.

Before connection draining ends, the ALB Ingress does not terminate its connections to the pod, but it cannot ensure that the pod keeps running. You can control the availability of a pod in the Terminating state by configuring spec.terminationGracePeriodSeconds or by using a preStop hook.
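For example, you can keep a terminating pod available throughout the drain period by giving it a preStop sleep and a termination grace period that are at least as long as the drain timeout. The snippet below is a sketch of the relevant pod template fields only; the container name, image, and durations are placeholders that you should adapt to your workload.

spec:
  template:
    spec:
      terminationGracePeriodSeconds: 240    # Longer than the drain timeout so the pod is not killed early.
      containers:
        - name: tea                          # Placeholder container name and image.
          image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
          lifecycle:
            preStop:
              exec:
                command: ["sleep", "200"]    # Keep the container running while connections drain.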
The following sample code shows an example to configure connection draining:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/connection-drain-enabled: "true"
    alb.ingress.kubernetes.io/connection-drain-timeout: "199"
  name: alb-ingress
spec:
  ingressClassName: alb
  rules:
    - host: alb.ingress.alibaba.com
      http:
        paths:
          - path: /test
            pathType: Prefix
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
Parameter | Description |
alb.ingress.kubernetes.io/connection-drain-enabled | Specifies whether to enable connection draining. Valid values: "true" and "false". By default, connection draining is disabled. |
alb.ingress.kubernetes.io/connection-drain-timeout | The timeout period of connection draining. Unit: seconds. Valid values: 0 to 900. Default value: "300". |