
Container Service for Kubernetes:NGINX Ingress FAQs

Last Updated: Nov 19, 2024

This topic provides answers to some frequently asked questions about Ingresses.

Which SSL or TLS protocol versions are supported by Ingresses?

Ingress-nginx supports Transport Layer Security (TLS) 1.2 and TLS 1.3. If the TLS protocol version that is used by a browser or mobile client is earlier than 1.2, errors may occur during handshakes between the client and ingress-nginx.

If you want ingress-nginx to support more TLS protocol versions, add the following configurations to the nginx-configuration ConfigMap in the kube-system namespace. For more information, see TLS/HTTPS.

Note

If you want to enable TLS 1.0 or 1.1 for NGINX Ingress controller 1.7.0 and later, you must specify @SECLEVEL=0 in the ssl-ciphers parameter.


ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA"
ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
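
For example, to enable TLS 1.0 and TLS 1.1 on NGINX Ingress controller 1.7.0 or later, a minimal sketch of the ConfigMap values might look like the following. The cipher list is abbreviated here; prepend @SECLEVEL=0 to the full cipher string shown above.

# Sketch only: prepending @SECLEVEL=0 to the cipher string allows TLS 1.0 and TLS 1.1 handshakes.
ssl-ciphers: "@SECLEVEL=0:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:...:AES256-SHA:DES-CBC3-SHA"
ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"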

Do Ingresses pass Layer 7 request headers to backend servers by default?

By default, ingress-nginx passes Layer 7 request headers to backend servers. However, request headers that do not conform to HTTP rules are filtered out before requests are forwarded to the backend servers. For example, request headers whose names contain underscores, such as Mobile_Version, are dropped by default. If you do not want to filter out these request headers, run the kubectl edit cm -n kube-system nginx-configuration command to add the following configuration to the nginx-configuration ConfigMap. For more information, see ConfigMap.

enable-underscores-in-headers: "true"

Can ingress-nginx forward requests to backend HTTPS servers?

To enable ingress-nginx to forward requests to backend HTTPS servers, add the following annotation to the Ingress configuration:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: xxxx
  annotations:
    # You must specify HTTPS as the protocol that is used by the backend server. 
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"

Do Ingresses pass client IP addresses at Layer 7?

By default, ingress-nginx adds the X-Forwarded-For and X-Real-IP header fields to pass client IP addresses. However, if a client has already added the X-Forwarded-For and X-Real-IP header fields to a request, the backend server cannot obtain the real client IP address.

Run the kubectl edit cm -n kube-system nginx-configuration command to add the following configurations to the nginx-configuration ConfigMap. This allows ingress-nginx to pass client IP addresses at Layer 7.

compute-full-forwarded-for: "true"
forwarded-for-header: "X-Forwarded-For"
use-forwarded-headers: "true"

If requests pass through multiple proxy servers before they reach the NGINX Ingress, you must also configure the proxy-real-ip-cidr field. Add the CIDR blocks of these proxy servers to the proxy-real-ip-cidr field and separate multiple CIDR blocks with commas (,). For more information, see Use WAF or transparent WAF.

proxy-real-ip-cidr:  "0.0.0.0/0,::/0"  

On IPv6 networks, if the NGINX Ingress receives requests with an empty X-Forwarded-For header and a Classic Load Balancer (CLB) instance is deployed in front of the NGINX Ingress, you can enable the Proxy protocol on the CLB instance to retrieve client IP addresses. For more information about the Proxy protocol, see Enable Layer 4 listeners to preserve client IP addresses.
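
If you enable the Proxy protocol on the CLB listener, ingress-nginx must also parse the Proxy protocol header. The following is a minimal sketch of the corresponding nginx-configuration ConfigMap value; use-proxy-protocol is a standard ingress-nginx ConfigMap key, and you should apply it only after the Proxy protocol is enabled on the CLB listener, because mismatched settings cause connection failures.

use-proxy-protocol: "true" # Parse the Proxy protocol header that the CLB prepends to each connection.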

Does the NGINX Ingress controller support HSTS?

By default, HTTP Strict Transport Security (HSTS) is enabled for nginx-ingress-controller. After a browser that supports HSTS receives the Strict-Transport-Security header in an HTTPS response, it sends subsequent requests to the domain over HTTPS. When such a browser sends a plain HTTP request, the browser redirects the request to HTTPS itself: the redirect response contains the status code 307 Internal Redirect and the header Non-Authoritative-Reason: HSTS, which indicates that the redirect was triggered by HSTS.

If you do not want client requests to be redirected to HTTPS, disable HSTS for nginx-ingress-controller. For more information, see HSTS.

Note

By default, the HSTS configuration is cached by browsers. You must manually delete the browser cache after you disable HSTS for nginx-ingress-controller.

Which rewrite rules are supported by ingress-nginx?

ingress-nginx supports only simple rewrite rules. For more information, see Rewrite. If you want to configure complex rewrite rules, use the following annotations:

  • configuration-snippet: Add this annotation to the location configuration of an Ingress (see the example after this list). For more information, see Configuration snippet.

  • server-snippet: Add this annotation to the server configuration of an Ingress. For more information, see Server snippet.
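
For example, the following is a minimal sketch of an Ingress that uses the configuration-snippet annotation to add a rewrite directive to the generated location block. The name, host, and backend Service are placeholders, and depending on the controller configuration, snippet annotations may need to be allowed (allow-snippet-annotations).

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example   # Placeholder name.
  annotations:
    # Inserted into the location block that ingress-nginx generates for this rule.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/old/(.*)$ /new/$1 break;
spec:
  ingressClassName: nginx
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /old
        pathType: Prefix
        backend:
          service:
            name: example-service   # Placeholder backend Service.
            port:
              number: 80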

You can also use snippets such as main-snippet to add global configurations. For more information, see main-snippet.

What gets updated in the system after I update the NGINX Ingress controller on the Add-ons page of the ACK console?

If the version of the NGINX Ingress controller is earlier than 0.44, the component includes the following resources:

  • serviceaccount/ingress-nginx

  • configmap/nginx-configuration

  • configmap/tcp-services

  • configmap/udp-services

  • clusterrole.rbac.authorization.k8s.io/ingress-nginx

  • clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx

  • role.rbac.authorization.k8s.io/ingress-nginx

  • rolebinding.rbac.authorization.k8s.io/ingress-nginx

  • service/nginx-ingress-lb

  • deployment.apps/nginx-ingress-controller

If the version of the NGINX Ingress controller is 0.44 or later, the component includes the following resources in addition to the preceding resources:

  • validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission

  • service/ingress-nginx-controller-admission

  • serviceaccount/ingress-nginx-admission

  • clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission

  • clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission

  • role.rbac.authorization.k8s.io/ingress-nginx-admission

  • rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission

  • job.batch/ingress-nginx-admission-create

  • job.batch/ingress-nginx-admission-patch

When you update the NGINX Ingress controller on the Add-ons page of the Container Service for Kubernetes (ACK) console, the configurations of the following resources remain unchanged:

  • configmap/nginx-configuration

  • configmap/tcp-services

  • configmap/udp-services

  • service/nginx-ingress-lb

The configurations of other resources are reset to default values. For example, the default value of the replicas parameter of the deployment.apps/nginx-ingress-controller resource is 2. If you set the value of replicas to 5 before you update the NGINX Ingress controller, the replicas parameter uses the default value 2 after you update the component from the Add-ons page of the ACK console.
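
To check and restore the replica count after an update, you can use standard kubectl commands. The following sketch uses the custom value 5 from the example above.

# Check the current replica count of the controller Deployment.
kubectl -n kube-system get deployment nginx-ingress-controller -o jsonpath='{.spec.replicas}'
# Restore the custom replica count if it was reset to the default value.
kubectl -n kube-system scale deployment nginx-ingress-controller --replicas=5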

How do I change Layer 4 listeners to Layer 7 HTTP or HTTPS listeners for ingress-nginx?

By default, the Server Load Balancer (SLB) instance used by the ingress-nginx Service listens on TCP ports 80 and 443. You can change the Layer 4 listeners to Layer 7 listeners by changing the protocol of the listeners to HTTP or HTTPS.

Note

Your service will be temporarily interrupted when the system changes the listeners. We recommend that you perform this operation during off-peak hours.

  1. Create a certificate and record the certificate ID (cert-id). For more information, see Use a certificate from Certificate Management Service.

  2. Change the listeners of the SLB instance used by the Ingress from Layer 4 to Layer 7 by using annotations.

    1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

    2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Network > Services.

    3. In the upper part of the Services page, set Namespace to kube-system. Find ingress-nginx-lb and click Edit YAML in the Actions column.

    4. In the View in YAML panel, set targetPort to 80 for port 443 in the ports section.

        - name: https
          port: 443
          protocol: TCP
          targetPort: 80 # Set targetPort to 80 for port 443.

      Add the following configurations to the annotations parameter and then click Update.

      service.beta.kubernetes.io/alibaba-cloud-loadbalancer-protocol-port: "http:80,https:443"
      service.beta.kubernetes.io/alibaba-cloud-loadbalancer-cert-id: "${YOUR_CERT_ID}"
  3. Verify the result.

    1. On the Services page, find the ingress-nginx-lb Service and click the icon in the Type column to go to the details page of the SLB instance.

    2. Click the Listener tab. If HTTP:80 and HTTPS:443 are displayed in the Frontend Protocol/Port column, the listeners of the SLB instance are changed from Layer 4 to Layer 7.

How do I specify an existing SLB instance for ack-ingress-nginx deployed from the Marketplace page of the ACK console?

  1. Log on to the ACK console. In the left-side navigation pane, choose Marketplace > Marketplace.

  2. On the App Catalog tab, select ack-ingress-nginx or ack-ingress-nginx-v1.

    • If your cluster runs Kubernetes 1.20 or earlier, select ack-ingress-nginx.

    • If your cluster runs a Kubernetes version later than 1.20, select ack-ingress-nginx-v1.

  3. Deploy an Ingress controller. For more information, see Deploy multiple Ingress controllers in a cluster.

    On the Parameters wizard page, delete the original annotations and then add new annotations.

    1. Delete all annotations in the controller.service.annotations section.


    2. Add new annotations.

      # Specify the SLB instance that you want to use.
      service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "${YOUR_LOADBALANCER_ID}"
      # Overwrite the listeners of the SLB instance.
      service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners: "true"


  4. Click OK to deploy the Ingress controller.

  5. After the Ingress controller is deployed, configure an Ingress class for the Ingress controller. For more information, see Deploy multiple Ingress controllers in a cluster.

How do I collect access logs from multiple Ingress controllers?


Procedure:

  1. Log on to the ACK console and click Clusters in the left-side navigation pane.

  2. On the Clusters page, click the name of the cluster that you want to manage and then click Cluster Information in the left-side navigation pane.

  3. On the Cluster Information page, click the Cluster Resources tab. On the Cluster Resources tab, click the ID on the right side of Log Service Project.


  4. On the Logstores page of the ACK console, create a Logstore. For more information, see Manage a Logstore. To avoid repeated log collection, we recommend that you create a Logstore for each Ingress controller.

    • You can name a Logstore based on the name of the Ingress controller that uses the Logstore.

    • In the Data Collection Wizard message, click Cancel.

  5. In the left-side navigation pane of the Logstores page, choose nginx-ingress > Logtail Configurations. In the Logtail Configuration column, click k8s-nginx-ingress to go to the configuration page.

  6. On the Logtail Configuration page, click Copy. On the Logtail Replication page, select the Logstore that you created from the drop-down list. In the Container Filtering section, click Add in the Container Label Whitelist column and add the labels of Ingress controllers as key-value pairs. On the Logtail Replication page, click Submit.


  7. In the left-side navigation pane of the Logstores page, click the Logstore that you created. In the upper-right corner of the page, click Enable. Then, click OK in the Search & Analysis panel. The Logstore configuration is complete.

  8. In the left-side navigation pane of the Logstores page, choose Logtail Configurations below the Logstore that you created. On the Logtail Configuration page, click Manage Logtail Configuration in the Actions column. On the Configuration Details tab, click Extract Field (Regex Mode) in the Processor Name column to view the extracted log fields.


  9. On the Logtail Configuration page, click Switch to Editor Configuration. Click Edit below Logtail Configuration. In the Plug-in Configuration section, configure the Keys and Regex parameters based on your requirements. Then, click Save.

    Note

    If different NGINX Ingress controllers use different log formats, you need to modify the processors parameter in the Logtail configuration in different Logstores accordingly.

How do I enable TCP listeners for the NGINX Ingress controller?

By default, Ingresses forward only external HTTP and HTTPS requests to Services in the cluster. You can configure the tcp-services ConfigMap to enable ingress-nginx to forward external TCP requests received on the specified ports to Services in the cluster.

Procedure:

  1. Use the tcp-echo template to deploy a Service and a Deployment.

  2. Use the following template to create a ConfigMap.

    1. Create a file named tcp-services-cm.yaml with the following content. Then, save the file and exit.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: tcp-services
        namespace: kube-system 
      data:
        9000: "default/tcp-echo:9000" # This configuration indicates that external TCP requests received on port 9000 are forwarded to the tcp-echo Service in the default namespace. 
        9001:"default/tcp-echo:9001" 
    2. Run the following command to create the ConfigMap:

      kubectl apply -f tcp-services-cm.yaml
  3. Open TCP ports for the Service used by nginx-ingress-controller. Then, save the changes and exit.

    kubectl edit svc nginx-ingress-lb -n kube-system 
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: nginx-ingress-lb
      name: nginx-ingress-lb
      namespace: kube-system
    spec:
      allocateLoadBalancerNodePorts: true
      clusterIP: 192.168.xx.xx
      ipFamilies:
      - IPv4
      ports:
      - name: http
        nodePort: 30xxx
        port: 80
        protocol: TCP
        targetPort: 80
      - name: https
        nodePort: 30xxx
        port: 443
        protocol: TCP
        targetPort: 443
      - name: tcp-echo-9000       # The port name.
        port: 9000                # The port number.
        protocol: TCP             # The protocol.
        targetPort: 9000          # The destination port.
      - name: tcp-echo-9001       # The port name.
        port: 9001                # The port number.
        protocol: TCP             # The protocol.
        targetPort: 9001
      selector:
        app: ingress-nginx
      sessionAffinity: None
      type: LoadBalancer
  4. Check whether the configuration takes effect.

    1. Run the following command to query the Service used by the NGINX Ingress controller. You can obtain the IP address of the SLB instance associated with the Ingress from the output.

       kubectl get svc -n kube-system| grep nginx-ingress-lb 

      Expected output:

      nginx-ingress-lb      LoadBalancer   192.168.xx.xx  172.16.xx.xx   80:31246/TCP,443:30298/TCP,9000:32545/TCP,9001:31069/TCP   
    2. Run the nc command to send helloworld to ports 9000 and 9001 of the SLB instance IP address. If the tcp-echo Service returns a response, the configuration takes effect.

      echo "helloworld" |  nc <172.16.xx.xx> 9000
      
      echo "helloworld" |  nc <172.16.xx.xx> 9001

What is the match logic of certificates configured for NGINX Ingresses?

An Ingress uses the spec.tls parameter to specify TLS configurations and the spec.rules.host parameter to specify the domain name of the Ingress. The NGINX Ingress controller uses Lua tables to store the mappings between domain names and certificates.

When a client sends an HTTPS request to NGINX, the request carries a Server Name Indication (SNI) field that specifies the host to which the request is sent. The NGINX Ingress uses the certificate.call() method to check whether a certificate is associated with the domain name. If no certificate is found, a fake certificate is returned.
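
For reference, the following is a minimal sketch of an Ingress that associates a domain name with a certificate. The name, Secret, and backend Service are placeholders.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress          # Placeholder name.
spec:
  tls:
  - hosts:
    - www.example.com
    secretName: example-tls      # Placeholder Secret that stores tls.crt and tls.key.
  rules:
  - host: www.example.com        # Must match an entry in spec.tls.hosts for this certificate to be served.
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service   # Placeholder backend Service.
            port:
              number: 80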

Sample NGINX configurations:


    ## start server _
    server {
        server_name _ ;
        listen 80 default_server reuseport backlog=65535 ;
        listen [::]:80 default_server reuseport backlog=65535 ;
        listen 443 default_server reuseport backlog=65535 ssl http2 ;
        listen [::]:443 default_server reuseport backlog=65535 ssl http2 ;
        set $proxy_upstream_name "-";
        ssl_reject_handshake off;
        ssl_certificate_by_lua_block {
            certificate.call()
        }
   ...
   }
    
    
    ## start server www.example.com
    server {
        server_name www.example.com ;
        listen 80  ;
        listen [::]:80  ;
        listen 443  ssl http2 ;
        listen [::]:443  ssl http2 ;
        set $proxy_upstream_name "-";
        ssl_certificate_by_lua_block {
            certificate.call()
        }
    ...
    }

ingress-nginx supports the Online Certificate Status Protocol (OCSP) stapling feature, which is used to check the certificate status. When this feature is enabled, clients do not need to verify the certificate status with certificate authorities (CAs). This speeds up certificate validation and accelerates access to NGINX. For more information, see Configure OCSP stapling.

What do I do if no certificate matches an NGINX Ingress?

Find the Secret that stores the certificate and run the following command to decode the Base64-encoded certificate and view its content:

kubectl get secret  <YOUR-SECRET-NAME>  -n <SECRET-NAMESPACE>  -o jsonpath={.data."tls\.crt"} |base64 -d  | openssl x509  -text -noout

Check whether the domain name that you accessed is included in the Common Name (CN) or Subject Alternative Name (SAN) field. If the domain name is not included, you need to create a new certificate.
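
If your OpenSSL version is 1.1.1 or later, the following sketch prints only the subject and SAN entries instead of the full certificate text:

kubectl get secret <YOUR-SECRET-NAME> -n <SECRET-NAMESPACE> -o jsonpath={.data."tls\.crt"} | base64 -d | openssl x509 -noout -subject -ext subjectAltName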

What do I do if NGINX pods fail health checks in heavy load scenarios?

Health checks are performed by sending requests to the /healthz path through port 10246 of NGINX.

The following messages are returned when NGINX fails health checks:

I0412 11:01:52.581960       7 healthz.go:261] nginx-ingress-controller check failed: healthz
[-]nginx-ingress-controller failed: the ingress controller is shutting down
2024/04/12 11:01:55 Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
W0412 11:01:55.895683       7 nginx_status.go:171] unexpected error obtaining nginx status info: Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
I0412 11:02:02.582247       7 healthz.go:261] nginx-ingress-controller check failed: healthz
[-]nginx-ingress-controller failed: the ingress controller is shutting down
2024/04/12 11:02:05 Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
W0412 11:02:05.896126       7 nginx_status.go:171] unexpected error obtaining nginx status info: Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
I0412 11:02:12.582687       7 healthz.go:261] nginx-ingress-controller check failed: healthz
[-]nginx-ingress-controller failed: the ingress controller is shutting down
2024/04/12 11:02:15 Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
W0412 11:02:15.895719       7 nginx_status.go:171] unexpected error obtaining nginx status info: Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
I0412 11:02:22.582516       7 healthz.go:261] nginx-ingress-controller check failed: healthz
[-]nginx-ingress-controller failed: the ingress controller is shutting down
2024/04/12 11:02:25 Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
W0412 11:02:25.896955       7 nginx_status.go:171] unexpected error obtaining nginx status info: Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
I0412 11:02:28.983016       7 nginx.go:408] "NGINX process has stopped"
I0412 11:02:28.983033       7 sigterm.go:44] Handled quit, delaying controller exit for 10 seconds
I0412 11:02:32.582587       7 healthz.go:261] nginx-ingress-controller check failed: healthz
[-]nginx-ingress-controller failed: the ingress controller is shutting down
2024/04/12 11:02:35 Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
W0412 11:02:35.895853       7 nginx_status.go:171] unexpected error obtaining nginx status info: Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
I0412 11:02:38.986048       7 sigterm.go:47] "Exiting" code=0

Under heavy load, the CPU usage of NGINX processes spikes and may approach 100%. In this case, NGINX fails health checks. To resolve this issue, we recommend that you scale out the NGINX pods and spread them across different nodes.

What do I do if certificates fail to be issued due to cert-manager errors?

This issue may occur when Web Application Firewall (WAF) is enabled. WAF may intercept HTTP-01 challenge requests, which interrupts certificate issuance. To resolve this issue, we recommend that you disable WAF. Before you disable WAF, evaluate the impact of doing so.

How do I handle NGINX memory usage spikes during peak hours?

If NGINX encounters memory usage spikes and out of memory (OOM) errors during peak hours, log on to the NGINX pod and identify the processes that consume excessive memory. In most cases, metric collection causes memory leaks. This issue has been observed in NGINX Ingress controller 1.6.4. We recommend that you update the NGINX Ingress controller to the latest version and disable metric collection by adding --enable-metrics=false to the startup parameters. If you still need metrics, disable collection of those that significantly increase memory usage, such as nginx_ingress_controller_ingress_upstream_latency_seconds. For more information, see Ingress controller stress test, Prometheus metric collector memory leak, and Metrics PR.
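
The following is a sketch of where the flag goes in the controller Deployment. You can run kubectl edit deploy -n kube-system nginx-ingress-controller and add it to the container args; only the relevant lines are shown.

containers:
- name: nginx-ingress-controller
  args:
  - /nginx-ingress-controller
  # ... existing startup parameters ...
  - --enable-metrics=false   # Disable metric collection to avoid the memory growth described above.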

How do I fix a stuck NGINX Ingress controller canary upgrade?

During a canary upgrade of the NGINX Ingress controller, the process may become stuck at the verification phase and display the message "Operation is forbidden for task in failed state". In most cases, this is because the component upgrade task was cleared by the system after it exceeded the 4-day validity period. You must manually adjust the canary status of the component.

If the component upgrade has already reached the release phase, skip the following steps and wait until the current task automatically stops after it exceeds the 4-day validity period.

Procedure:

After you modify the parameters, the component upgrade automatically resumes and replaces the old pods to complete the canary upgrade. However, the component status on the Add-ons page of the ACK console still shows as in progress, and returns to normal after approximately two weeks.

  1. Run the following command to modify the Deployment of nginx-ingress-controller:

    kubectl edit deploy -n kube-system  nginx-ingress-controller
  2. Set the following parameters to the specified values:

    • spec.minReadySeconds: 0

    • spec.progressDeadlineSeconds: 600

    • spec.strategy.rollingUpdate.maxSurge: 25%

    • spec.strategy.rollingUpdate.maxUnavailable: 25%


  3. Save the file after the configuration.
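
As an alternative to interactive editing, the following sketch sets the same values with a single patch command:

kubectl -n kube-system patch deployment nginx-ingress-controller --type=strategic -p '{"spec":{"minReadySeconds":0,"progressDeadlineSeconds":600,"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"}}}}'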

Why does chunked transfer encoding stop working since controller v1.10?

If your backend code sets the Transfer-Encoding: chunked HTTP response header and the controller logs show errors about duplicate headers, this may be caused by changes in NGINX. For more information, see the NGINX update log. Since NGINX Ingress controller v1.10, NGINX performs stricter validation of HTTP responses. As a result, backend responses that contain multiple Transfer-Encoding: chunked headers are considered invalid. Make sure that your backend service returns only one Transfer-Encoding: chunked header. For details, see GitHub Issue #11162.
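
To check how many Transfer-Encoding headers your backend returns, you can query the backend Service directly and count the matching header lines. The address and path in the following sketch are placeholders.

# A count greater than 1 indicates duplicate Transfer-Encoding headers in the backend response.
curl -s -D - -o /dev/null http://<BACKEND-SERVICE-IP>:<PORT>/<PATH> | grep -ci '^Transfer-Encoding'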

How do I configure access control based on IP address whitelists and blacklists in NGINX Ingress?

You can configure a blacklist or whitelist for the NGINX Ingress to reject or allow requests from specific IP addresses by adding key-value pairs to the ConfigMap or by adding annotations to Ingress routing rules. The ConfigMap configuration applies globally, whereas the Ingress annotations take effect for specific routes. Ingress-level configurations take precedence over global configurations. The following annotations are supported. For more details, see Denylist source range and Whitelist source range.

  • nginx.ingress.kubernetes.io/denylist-source-range: Specifies the IP address blacklist of a specific route. IP addresses and CIDR blocks are supported. Separate multiple IP addresses or CIDR blocks with commas (,).

  • nginx.ingress.kubernetes.io/whitelist-source-range: Specifies the IP address whitelist of a specific route. IP addresses and CIDR blocks are supported. Separate multiple IP addresses or CIDR blocks with commas (,).
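
For example, to allow requests only from specific CIDR blocks, add the whitelist annotation to the metadata.annotations section of your Ingress. The CIDR blocks below are placeholders.

metadata:
  annotations:
    # Requests from other source addresses receive an HTTP 403 response.
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.1.1"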

Known issues in NGINX Ingress v1.2.1

When a defaultBackend field is specified in an Ingress, it may override the defaultBackend settings of the default server block in NGINX. For more information, see GitHub Issue #8823.

To resolve this issue, we recommend that you update the NGINX Ingress controller to v1.3 or later. For more information about how to update, see Update the NGINX Ingress controller.

How can I handle connection resets when accessing Internet services with the curl command?

When using the curl command to access Internet services outside the Chinese mainland through the HTTP protocol, you may encounter the error message curl: (56) Recv failure: Connection reset by peer. Normally, this is because the HTTP plaintext request may contain sensitive keywords, causing the request to be blocked or the response to be reset. You can configure a TLS certificate for Ingress rules to ensure that communication is encrypted. For more information about how to configure a TLS certificate, see Advanced NGINX Ingress configurations.