
Container Compute Service:Quick Start for ALB Ingress

Last Updated:Feb 10, 2026

Application Load Balancer (ALB) Ingress is built on ALB to provide powerful Ingress traffic management. It is compatible with Nginx Ingress, can handle complex business routing, and supports automatic certificate discovery. It supports the HTTP, HTTPS, and QUIC protocols. Using ALB Ingress in an Alibaba Cloud Container Service for Kubernetes (ACK) cluster provides the high elasticity and large-scale Layer 7 traffic processing required for cloud-native applications.

How it works

ALB Ingress involves the following basic concepts:

  • The ALB Ingress Controller is the component that manages Ingress resources. It uses the cluster's API Server to dynamically retrieve changes in Ingress and AlbConfig resources and updates the ALB instance based on the forwarding rules defined in the Ingress. Unlike the Nginx Ingress Controller, the ALB Ingress Controller is only the control plane of the ALB instance: it manages the instance but does not handle user traffic directly. The ALB instance itself forwards user traffic.

  • AlbConfig (cluster-level CRD): An AlbConfig is a cluster-level Custom Resource Definition (CRD) created by the ALB Ingress Controller. The parameters in an AlbConfig define the configuration of the ALB instance. One AlbConfig corresponds to one ALB instance. The ALB instance is the entry point for user traffic and is responsible for forwarding user requests to backend Services. It is fully managed by Application Load Balancer (ALB).

  • IngressClass: An IngressClass defines the association between an Ingress and an AlbConfig.

  • Ingress: An Ingress is a resource object in Kubernetes that defines external traffic routing and access rules. The ALB Ingress Controller monitors changes in Ingress resources and updates the ALB instance to forward traffic.

  • Service: In Kubernetes, pods are temporary resources that change frequently. A service provides a stable and unified entry point for a group of pods that perform the same function. Other applications or services can communicate with the backend pods by accessing the virtual IP address and port of the service, without needing to be aware of changes to the individual pods.

The following figure shows the logical relationship between an ALB instance and an ALB Ingress.

[Figure: logical relationship between an ALB instance and an ALB Ingress]

Limitations

The names of AlbConfig, Namespace, Ingress, and Service resources cannot start with aliyun.

Earlier versions of the Nginx Ingress controller do not recognize the spec:ingressClassName field. If both Nginx Ingresses and ALB Ingresses are configured in your cluster, the ALB Ingresses may be reconciled by an earlier version of the Nginx Ingress controller. To avoid this issue, update the Nginx Ingress controller as soon as possible or use annotations to specify the IngressClasses of ALB Ingresses. For more information, see Upgrade the Nginx Ingress controller component or Advanced ALB Ingress usage.
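As an illustration of the annotation-based workaround, the standard Kubernetes legacy annotation kubernetes.io/ingress.class can pin an Ingress to ALB so that older Nginx Ingress controllers, which ignore spec.ingressClassName, do not reconcile it. The sketch below is illustrative only; the Ingress and Service names are hypothetical, and it assumes an IngressClass named alb exists in the cluster.

```yaml
# Sketch: pin an Ingress to ALB via the legacy class annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-alb-ingress              # hypothetical name
  annotations:
    kubernetes.io/ingress.class: alb     # legacy annotation read by older controllers
spec:
  rules:
  - host: demo.domain.ingress.top
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-svc            # hypothetical Service
            port:
              number: 80
```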

Scenario example

This tutorial deploys two NGINX-based services (four pods in total) and demonstrates how to configure ALB Ingress to forward traffic to different services based on URL paths under the same domain name.

Frontend request                  Traffic is forwarded to

demo.domain.ingress.top/coffee    the coffee service
demo.domain.ingress.top/tea       the tea service

Prerequisites

Step 1: Deploy backend services

Console

  1. Log on to the ACS console. In the left navigation pane, click Clusters.

  2. On the Clusters page, click the name of the target cluster. In the left navigation pane, choose Workloads > Deployments.

  3. On the Deployments page, click Create from YAML in the upper-right corner.

  4. On the Create page, perform the following steps.

    1. Sample Template: Select Custom.

    2. Template: Enter the YAML configuration file code. This configuration file deploys two Deployments named coffee and tea, and two Services named coffee-svc and tea-svc.

      YAML configuration reference

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: coffee
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: coffee
        template:
          metadata:
            labels:
              app: coffee
          spec:
            containers:
            - name: coffee
              image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginxdemos:latest
              ports:
              - containerPort: 80
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: coffee-svc
      spec:
        ports:
        - port: 80
          targetPort: 80
          protocol: TCP
        selector:
          app: coffee
        type: ClusterIP
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: tea
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: tea
        template:
          metadata:
            labels:
              app: tea
          spec:
            containers:
            - name: tea
              image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginxdemos:latest
              ports:
              - containerPort: 80
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: tea-svc
      spec:
        ports:
        - port: 80
          targetPort: 80
          protocol: TCP
        selector:
          app: tea
        type: ClusterIP
  5. After the configuration is complete, click Create. A Created Successfully message appears.

  6. Verify that the deployments and services are created.

    1. In the navigation pane on the left, choose Workloads > Deployments. Verify that the deployments named coffee and tea are created.

    2. In the navigation pane on the left, choose Network > Services. Verify that the services named coffee-svc and tea-svc are created.

kubectl

  1. Create a file named cafe-service.yaml with the following content. This file is used to deploy two deployments named coffee and tea, and two services named coffee-svc and tea-svc.

    YAML configuration reference

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: coffee
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: coffee
      template:
        metadata:
          labels:
            app: coffee
        spec:
          containers:
          - name: coffee
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginxdemos:latest
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: coffee-svc
    spec:
      ports:
      - port: 80
        targetPort: 80
        protocol: TCP
      selector:
        app: coffee
      type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: tea
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: tea
      template:
        metadata:
          labels:
            app: tea
        spec:
          containers:
          - name: tea
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginxdemos:latest
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: tea-svc
    spec:
      ports:
      - port: 80
        targetPort: 80
        protocol: TCP
      selector:
        app: tea
      type: ClusterIP
  2. Run the following command to deploy the two deployments and two services.

    kubectl apply -f cafe-service.yaml

    Expected output:

    deployment.apps/coffee created
    service/coffee-svc created
    deployment.apps/tea created
    service/tea-svc created
  3. Run the following commands to check the status of the applications and services.

    1. Run the following command to check the status of the applications.

      kubectl get deployment

      Expected output:

      NAME     READY   UP-TO-DATE   AVAILABLE   AGE
      coffee   2/2     2            2           2m26s
      tea      2/2     2            2           2m26s
    2. Run the following command to check the status of the services.

      kubectl get svc

      Expected output:

      NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
      coffee-svc   ClusterIP   172.16.XX.XX   <none>        80/TCP    9m38s
      tea-svc      ClusterIP   172.16.XX.XX   <none>        80/TCP    9m38s
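Optionally, you can also confirm that each Service has selected its pods by inspecting the endpoints. This is a quick supplementary check, assuming the same cluster context as the steps above; each Service should list one pod IP per replica.

```shell
# Each Service should show two pod IP:port pairs, one per replica.
kubectl get endpoints coffee-svc tea-svc
```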

Step 2: Create an AlbConfig

Console

  1. Log on to the ACS console. In the left navigation pane, click Clusters.

  2. On the Clusters page, click the name of the target cluster. In the left navigation pane, choose Workloads > Custom Resources.

  3. On the CRDs tab, click Create from YAML.

    1. Sample Template: Select Custom.

    2. Template: Enter the YAML configuration file code.

      YAML configuration reference

      apiVersion: alibabacloud.com/v1
      kind: AlbConfig
      metadata:
        name: alb-demo
      spec:
        config:
          name: alb-test
          addressType: Internet
          zoneMappings:                          # To ensure high availability, select at least two vSwitches in different zones.
          - vSwitchId: vsw-uf6ccg2a9g71hx8go**** # Replace with the actual vSwitch ID (in Zone 1).
          - vSwitchId: vsw-uf6nun9tql5t8nh15**** # Replace with the actual vSwitch ID (in Zone 2, which must be different from Zone 1).
        listeners:
          - port: 80
            protocol: HTTP

      The following parameters can be adjusted:

      • metadata.name (required): The name of the AlbConfig. The name must be unique within the cluster; choose a unique name to avoid naming conflicts.

      • spec.config.name (optional): The name of the ALB instance.

      • spec.config.addressType (optional): The network type of the ALB instance. Valid values:

        • Internet (default): The ALB instance is public-facing and accessible from the Internet.

          Note: Application Load Balancer provides Internet-facing services by associating an Elastic IP Address (EIP) with the instance. If you use an Internet-facing ALB instance, you are charged for the EIP instance, bandwidth, and data transfer. For more information, see Pay-as-you-go.

        • Intranet: The ALB instance is private and accessible only within the VPC.

      • spec.config.zoneMappings (required): The vSwitch IDs of the ALB instance. For more information about how to create a vSwitch, see Create and manage vSwitches.

        Note: The specified vSwitches must be in zones supported by ALB and in the same VPC as the cluster. For the regions and zones supported by ALB, see Regions and zones that support ALB. ALB supports multi-zone deployment; if the current region has two or more zones, select at least two vSwitches in different zones to ensure high availability.

      • spec.listeners (optional): The listener ports and protocols of the ALB instance. This example configures an HTTP listener on port 80. A listener defines how traffic enters the load balancer; if you omit this configuration, you must create a listener manually before ALB Ingress can serve traffic.

  4. After the configuration is complete, click Create. A Created Successfully message appears.

  5. Verify that the ALB instance is created.

    1. Log on to the Application Load Balancer (ALB) console.

    2. In the top menu bar, select the region where the instance is located.

    3. On the Instances page, find the ALB instance named alb-test. This indicates that the instance was created successfully.

kubectl

  1. Copy the following content to a file named alb-test.yaml to create the AlbConfig.

    YAML configuration reference

    apiVersion: alibabacloud.com/v1
    kind: AlbConfig
    metadata:
      name: alb-demo
    spec:
      config:
        name: alb-test
        addressType: Internet
        zoneMappings:                          # To ensure high availability, select at least two vSwitches in different zones.
        - vSwitchId: vsw-uf6ccg2a9g71hx8go**** # Replace with the actual vSwitch ID (in Zone 1).
        - vSwitchId: vsw-uf6nun9tql5t8nh15**** # Replace with the actual vSwitch ID (in Zone 2, which must be different from Zone 1).
      listeners:
        - port: 80
          protocol: HTTP

    The following parameters can be adjusted:

    • metadata.name (required): The name of the AlbConfig. The name must be unique within the cluster; choose a unique name to avoid naming conflicts.

    • spec.config.name (optional): The name of the ALB instance.

    • spec.config.addressType (optional): The network type of the ALB instance. Valid values:

      • Internet (default): The ALB instance is public-facing and accessible from the Internet.

        Note: Application Load Balancer provides Internet-facing services by associating an Elastic IP Address (EIP) with the instance. If you use an Internet-facing ALB instance, you are charged for the EIP instance, bandwidth, and data transfer. For more information, see Pay-as-you-go.

      • Intranet: The ALB instance is private and accessible only within the VPC.

    • spec.config.zoneMappings (required): The vSwitch IDs of the ALB instance. For more information about how to create a vSwitch, see Create and manage vSwitches.

      Note: The specified vSwitches must be in zones supported by ALB and in the same VPC as the cluster. For the regions and zones supported by ALB, see Regions and zones that support ALB. ALB supports multi-zone deployment; if the current region has two or more zones, select at least two vSwitches in different zones to ensure high availability.

    • spec.listeners (optional): The listener ports and protocols of the ALB instance. This example configures an HTTP listener on port 80. A listener defines how traffic enters the load balancer; if you omit this configuration, you must create a listener manually before ALB Ingress can serve traffic.

  2. Run the following command to create the AlbConfig.

    kubectl apply -f alb-test.yaml

    Expected output:

    albconfig.alibabacloud.com/alb-demo created

    The output shows that the AlbConfig was created successfully.
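You can also check the AlbConfig resource itself to follow the provisioning of the ALB instance. This is an optional check, assuming the AlbConfig CRD has been registered by the ALB Ingress Controller as described above.

```shell
# Shows the AlbConfig; instance details appear once provisioning completes.
kubectl get albconfig alb-demo
```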

Step 3: Create an IngressClass

Each IngressClass must correspond to one AlbConfig.

Console

  1. Log on to the ACS console. In the left navigation pane, click Clusters.

  2. On the Clusters page, click the name of the target cluster. In the left navigation pane, choose Workloads > Custom Resources.

  3. On the CRDs tab, click Create from YAML.

    1. Sample Template: Select Custom.

    2. Template: Enter the YAML configuration file code.

      YAML configuration reference

      apiVersion: networking.k8s.io/v1
      kind: IngressClass
      metadata:
        name: alb
      spec:
        controller: ingress.k8s.alibabacloud/alb
        parameters:
          apiGroup: alibabacloud.com
          kind: AlbConfig
          name: alb-demo

      The following parameters can be adjusted:

      • metadata.name (required): The name of the IngressClass. The name must be unique within the cluster; choose a unique name to avoid naming conflicts.

      • spec.parameters.name (required): The name of the associated AlbConfig.

  4. After the configuration is complete, click Create. A Created Successfully message appears.

  5. Verify that the IngressClass is created.

    1. In the navigation pane on the left, choose Workloads > Custom Resources.

    2. Click the Resource Object Browser tab.

    3. In the API Group search bar, enter IngressClass and search. Verify that the corresponding IngressClass has been created.

kubectl

  1. Create a file named alb.yaml with the following content to create the IngressClass.

    YAML configuration reference

    apiVersion: networking.k8s.io/v1
    kind: IngressClass
    metadata:
      name: alb
    spec:
      controller: ingress.k8s.alibabacloud/alb
      parameters:
        apiGroup: alibabacloud.com
        kind: AlbConfig
        name: alb-demo

    The following parameters can be adjusted:

    • metadata.name (required): The name of the IngressClass. The name must be unique within the cluster; choose a unique name to avoid naming conflicts.

    • spec.parameters.name (required): The name of the associated AlbConfig.

  2. Run the following command to create the IngressClass.

    kubectl apply -f alb.yaml

    Expected output:

    ingressclass.networking.k8s.io/alb created
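If most Ingresses in the cluster should use ALB, Kubernetes also lets you mark an IngressClass as the cluster default with the standard ingressclass.kubernetes.io/is-default-class annotation; Ingresses that omit spec.ingressClassName then fall back to it. The sketch below is optional and not required for this tutorial.

```yaml
# Optional: make the alb IngressClass the cluster default.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"  # standard Kubernetes annotation
spec:
  controller: ingress.k8s.alibabacloud/alb
  parameters:
    apiGroup: alibabacloud.com
    kind: AlbConfig
    name: alb-demo
```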

Step 4: Create an Ingress

Console

  1. Log on to the ACS console. In the left navigation pane, click Clusters.

  2. On the Clusters page, click the name of the target cluster. In the left navigation pane, choose Network > Ingresses.

  3. On the Ingresses page, click Create Ingress. In the Create Ingress dialog box, configure the following items.

    Configuration items and the example values used in this tutorial:

    • Gateway Type: You can select ALB or MSE as the application load balancing gateway type. Example value: ALB Ingress.

    • Name: The custom name of the Ingress. Example value: cafe-ingress.

    • Ingress Class: The custom class of the Ingress. Example value: alb.

    • Rules: Click + Add Rule to add multiple routing rules. Each rule consists of the following items.

      • Domain Name: The custom domain name.

      • Path Mapping: Configure the following items.

        • Path: The URL path for accessing the service. If this is not configured, the root path / is used.

        • Matching Rule: Supports Prefix, Exact, and Default (ImplementationSpecific).

        • Service Name: The target service, which is a Service in Kubernetes.

        • Port: The port that the service needs to expose.

      An Ingress supports multiple paths under the same domain name. Click + Add Path to add a path.

      Example values:

      • Domain Name: demo.domain.ingress.top

      • Path Mapping 1: Path /tea, Match Type ImplementationSpecific, Service Name tea-svc, Port 80

      • Path Mapping 2: Path /coffee, Match Type ImplementationSpecific, Service Name coffee-svc, Port 80

    • TLS Configuration: Enable TLS configuration to configure a secure routing service.

      • Domain Name: A custom domain name.

      • Secret: Select the corresponding secret as needed. To create a secret, perform the following steps.

        1. To the right of Secret, click Create.

        2. In the Create Secret dialog box, enter a custom Name, Cert, and Key for the secret, then click OK.

        3. From the Secret drop-down list, select the created secret.

      Click + Add TLS Configuration to configure multiple TLS settings. For more information, see Configure an HTTPS certificate for encrypted communication.

      Example value: Disable TLS Configuration; TLS is not required for this example.

    • More Configurations:

      • Phased Release: Turn on the phased release switch. You can set phased release rules based on request headers, cookies, and weights.

        Note: You can set only one of the request header, cookie, or weight rules. If multiple rules are set at the same time, they are matched in the order of request header, cookie, and weight.

        • By request header: Splits traffic based on the request header. After this is set, the alb.ingress.kubernetes.io/canary-by-header and alb.ingress.kubernetes.io/canary-by-header-value annotations are added.

        • By cookie: Splits traffic based on the cookie. After this is set, the alb.ingress.kubernetes.io/canary-by-cookie annotation is added.

        • By weight: Sets the percentage of requests to a specified service. The value must be an integer from 0 to 100. After this is set, the alb.ingress.kubernetes.io/canary-weight annotation is added.

      • Protocol: Supports backend services that use the HTTPS and gRPC protocols. After this is set, the alb.ingress.kubernetes.io/backend-protocol annotation is added.

      • Rewrite Path: The path in a client request is rewritten before the request is sent to the backend service. After this is set, the alb.ingress.kubernetes.io/rewrite-target annotation is added.

      Example value: Disable phased release and keep the default protocol and rewrite path; none of these are required for this example.

    • Custom Forwarding Rules: Enable custom forwarding rules for fine-grained management of inbound traffic.

      Note: A forwarding rule can have a maximum of 10 condition entries.

      • From the Forwarding Conditions drop-down list, select one of the following:

        • Domain Name: Matches the request domain name. If multiple domain names are set, they are joined by a logical OR. When set, the alb.ingress.kubernetes.io/conditions.host-example annotation is added.

        • Path: Matches the request path. If multiple paths are set, they are joined by a logical OR. When set, the alb.ingress.kubernetes.io/conditions.path-example annotation is added.

        • HTTP Header: Matches the request header as a key-value pair, for example, Key Is: headername, Value Is: headervalue1. If multiple header values are set, they are joined by a logical OR. When set, the alb.ingress.kubernetes.io/conditions.http-header-example annotation is added.

      • From the Forwarding Actions drop-down list, select one of the following:

        • Forward To: Forwards traffic to one or more backend server groups. For Service Name, select the target service; for Port, select the target port number; then configure a custom weight value. Note: If you select Forward To, you do not need to configure Path Mapping in the rule.

        • Return Fixed Response: Sets a fixed response to be returned to the client by ALB. Configure Response Status Code, Response Body Type (optional), and Response Body (optional) as needed. Supported response body types: text/plain, text/css, text/html, application/javascript, and application/json.

      Custom forwarding rules support various forwarding conditions (domain name, path, HTTP header) and forwarding actions (forward to a service or return a fixed response). See Customize forwarding rules for ALB Ingress.

      Example value: Disable custom forwarding rules; they are not required for this example.

    • Annotations: You can specify a custom annotation name and value, or select or search for an annotation to configure. For more information about Ingress annotations, see Annotations.

      Example value: None; annotations are not required for this example.

    • Labels: Labels add tags to an Ingress to indicate its characteristics.

      Example value: None; labels are not required for this example.

  4. After the configuration is complete, click OK at the bottom left of the Create Ingress dialog box.

  5. Verify that the Ingress is created.

    1. In the navigation pane on the left, choose Network > Ingresses. Verify that the Ingress named cafe-ingress is deployed.

    2. In the Endpoints column for cafe-ingress, view the endpoint information.

kubectl

  1. Create a file named cafe-ingress.yaml with the following content to create the Ingress.

    YAML configuration reference

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: cafe-ingress 
    spec:
      ingressClassName: alb
      rules:
      - host: demo.domain.ingress.top
        http:
          paths:
          # Configure Context Path
          - path: /tea
            pathType: ImplementationSpecific
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
          # Configure Context Path
          - path: /coffee
            pathType: ImplementationSpecific
            backend:
              service:
                name: coffee-svc
                port: 
                  number: 80

    The following parameters can be adjusted:

    • metadata.name (required): The name of the Ingress. The name must be unique within the cluster; choose a unique name to avoid naming conflicts.

    • spec.ingressClassName (required): The name of the associated IngressClass.

    • spec.rules.host (optional): The domain name matched against the `Host` field of the HTTP request header. Set this to your custom domain name. When you access a domain name in a browser, such as http://demo.domain.ingress.top, the browser automatically adds a "Host: demo.domain.ingress.top" header to the HTTP request. The `host` field in an Ingress rule is matched against this header; when a match is found, the rule routes the request to the corresponding backend service.

      Note: If you configure a custom domain name here, ensure that an ICP filing is obtained for the domain name. Otherwise, the domain name may fail to resolve. For more information, see ICP filing process. If this field is not configured, the rule matches all requests that reach the Ingress.

    • spec.rules.http.paths.path (required): The URL path to forward.

    • spec.rules.http.paths.pathType (required): The URL matching rule. For more information, see Forward requests based on URL paths.

    • spec.rules.http.paths.backend.service.name (required): The name of the Service you created earlier.

    • spec.rules.http.paths.backend.service.port.number (required): The port number of the Service you created earlier. This value determines the port used when routing to the backend service; ensure it is correct so that requests reach the backend.

  2. Run the following command to configure the domain name and path that expose the coffee and tea services.

    kubectl apply -f cafe-ingress.yaml

    Expected output:

    ingress.networking.k8s.io/cafe-ingress created
  3. (Optional) Run the following command to retrieve the DNS address of the ALB instance.

    kubectl get ingress

    Expected output:

    NAME           CLASS    HOSTS                         ADDRESS                                               PORTS   AGE
    cafe-ingress   alb      demo.domain.ingress.top       alb-m551oo2zn63yov****.cn-hangzhou.alb.aliyuncs.com   80      50s
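The annotation-driven options described in the console flow can also be set directly in YAML. As one hedged illustration, the sketch below uses the alb.ingress.kubernetes.io/rewrite-target annotation named earlier to strip the /coffee prefix before requests reach the backend. The Ingress name is hypothetical, and the exact rewrite semantics should be verified against the Annotations reference before use.

```yaml
# Sketch: rewrite /coffee/* to / before forwarding to coffee-svc.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: coffee-rewrite                            # hypothetical name
  annotations:
    alb.ingress.kubernetes.io/rewrite-target: /   # annotation named in the console flow
spec:
  ingressClassName: alb
  rules:
  - host: demo.domain.ingress.top
    http:
      paths:
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80
```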

(Optional) Step 5: Configure domain name resolution

If you set the spec.rules.host field to a custom domain name when creating the Ingress, add a CNAME record to resolve the domain name to the ALB DNS name. Then access the service through your custom domain name.

  1. Log on to the ACS console. In the left navigation pane, click Clusters.

  2. Click the name of the cluster to open the cluster management page.

  3. In the navigation pane on the left, choose Network > Ingresses.

  4. In the Endpoints column for cafe-ingress, copy its corresponding DNS name.

  5. Perform the following steps to add a CNAME record.

    1. Log on to the Alibaba Cloud DNS console.

    2. On the Domain Names page, click Add Domain Name.

    3. In the Add Domain Name dialog box, enter the host domain name, and then click OK.

      Important

      The host domain name must have passed TXT record verification.

    4. In the Actions column of the target domain name, click Configure.

    5. On the Configure page, click Add Record.

    6. In the Add Record panel, configure the following information to complete the CNAME configuration, then click OK.

      • Type: Select CNAME from the drop-down list.

      • Host: The prefix of the domain name, such as www.

      • Resolution Request Source: Select Default.

      • Value: The CNAME address corresponding to the domain name, which is the DNS name you copied in the previous step.

      • TTL: The time that the DNS record is cached on the DNS server. This topic uses the default value.

Step 6: Test traffic forwarding

In a browser, enter the test domain name and URL path to test whether traffic is forwarded correctly.

Note
  • If you configured a custom domain name, the test domain name is your custom domain name.

  • If you did not configure a custom domain name, the test domain name is the endpoint DNS name of cafe-ingress.

This example uses demo.domain.ingress.top as the test domain name.

  1. In a browser, enter demo.domain.ingress.top/coffee. The backend service interface corresponding to coffee-svc is returned.

  2. In a browser, enter demo.domain.ingress.top/tea. The backend service interface corresponding to tea-svc is returned.
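If DNS resolution for the test domain name is not yet configured, the same check can be run from a terminal by sending the Host header directly to the ALB DNS name. This is a sketch; substitute the DNS name shown in the Endpoints column or in the output of kubectl get ingress.

```shell
# Present the custom Host header while addressing the ALB DNS name directly.
curl -H "Host: demo.domain.ingress.top" http://alb-m551oo2zn63yov****.cn-hangzhou.alb.aliyuncs.com/coffee
curl -H "Host: demo.domain.ingress.top" http://alb-m551oo2zn63yov****.cn-hangzhou.alb.aliyuncs.com/tea
```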
