
Alibaba Cloud Service Mesh:Use ASM cross-cluster mesh proxy to implement cross-network communication among multiple clusters

Last Updated:Nov 14, 2024

Service Mesh (ASM) allows you to add multiple Container Service for Kubernetes (ACK) clusters to one ASM instance, which provides a centralized management and O&M platform for services distributed across clusters. ASM cross-cluster mesh proxies provide a more flexible interconnection solution for multi-cluster networks. This topic describes how to use ASM cross-cluster mesh proxies to configure cross-network communication among multiple clusters.

Background information

ASM supports a multi-cluster mode that allows you to add multiple ACK clusters residing in different networks to the same ASM instance. If Layer 3 cross-network communication cannot be established for these clusters, for example because of facility restrictions, CIDR block conflicts, or cost, you can use ASM cross-cluster mesh proxies to connect the clusters. Cross-cluster mesh proxies let you flexibly connect the clusters over public or private networks. This resolves CIDR block conflicts without modifying your service code, and enables centralized traffic governance, security protection, and end-to-end observability across clusters. This topic describes how to use ASM cross-cluster mesh proxies to configure cross-network communication for multiple clusters added to the same ASM instance. In this example, the sleep application accesses the HTTPBin application across clusters.


Benefits

The ASM cross-cluster mesh proxies provided by ASM instances of v1.22 and later fully implement Layer 7 load balancing. In cross-cluster communication scenarios, east-west ASM gateways provide the same routing capabilities as in single-cluster scenarios.

Prerequisites

  • An ASM instance whose version is 1.22 or later is created. For more information, see Create an ASM instance.

  • Multiple clusters are added to the ASM instance. For more information, see Add a cluster to an ASM instance. (In this example, two clusters are added.)

  • Automatic sidecar proxy injection is enabled for the ASM instance. For more information, see the "Enable automatic sidecar proxy injection" section of the Manage global namespaces topic.

  • Cross-cluster access between services is available only if one of the following two conditions is met:

    • The Domain Name System (DNS) proxy feature is enabled in the ASM instance. For more information, see Use the DNS proxy feature in an ASM instance. This method is recommended.

    • A service identical to the destination service in the server-side cluster is manually created in the client-side cluster, so that clients can resolve the service name, as shown in the following sketch.
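
      The following is a minimal sketch of such a manually created service for the HTTPBin application used in this topic. The ports mirror the httpbin Service defined in Step 3; apply the sketch in the client-side cluster (ACK 1 in this example). The exact requirements may differ in your environment; the DNS proxy approach above is recommended.

        # Placeholder service in the client-side cluster. It selects no local
        # pods; it only gives clients a resolvable name and a cluster IP for
        # the remote httpbin service.
        apiVersion: v1
        kind: Service
        metadata:
          name: httpbin
        spec:
          ports:
          - name: http
            port: 8000
            targetPort: 80
          selector:
            app: httpbin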

Step 1: Associate an elastic IP address (EIP) with the control plane of the ASM instance

If your data-plane cluster cannot communicate with the virtual private cloud (VPC) in which your ASM instance resides and you want to connect the data plane and the control plane over the Internet, associate an EIP with the Server Load Balancer (SLB) instance that serves the Istio Pilot endpoint of the ASM control plane. This exposes the Istio Pilot endpoint to the Internet.

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose ASM Instance > Base Information.

  3. On the right side of the Base Information page, select Istio Pilot Endpoint and click Bind EIP.

Note

If you bind an EIP in this way, the EIP is also released when the ASM instance is released.
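
After the EIP is associated, you can optionally verify from the data-plane cluster that the exposed Istio Pilot endpoint is reachable. The following sketch assumes the standard istiod XDS-over-TLS port 15012; replace ${EIP} with the Istio Pilot endpoint address shown on the Base Information page and use the port displayed in the console if it differs:

  # Start a temporary pod in the data-plane cluster and test TCP connectivity
  # to the exposed endpoint. curl treats telnet:// as a plain TCP connection
  # attempt, so a successful connect confirms reachability.
  kubectl run net-test --rm -it --restart=Never \
    --image=registry.cn-hangzhou.aliyuncs.com/acs/curl:8.1.2 \
    --command -- curl -v --max-time 5 telnet://${EIP}:15012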

Step 2: Configure network settings for the clusters and enable cross-cluster mesh proxies

You can specify a logical network for each cluster. Services on the same logical network can directly access each other. Services on different logical networks must use cross-cluster mesh proxies to access each other.

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Cluster & Workload Management > Kubernetes Clusters.

  3. Click Multi-cluster Network Configurations and configure the following settings:

    • Set Homing Logical Network Name to network1 for ACK 1.

    • Set Homing Logical Network Name to network2 for ACK 2 and turn on Enable Access Through Cross-cluster Mesh Proxy in ACK 2.


After you apply the preceding configurations, ASM creates a default cross-cluster mesh proxy in ACK 2. This mesh proxy is associated with an EIP. Services in ACK 1 automatically use this cross-cluster mesh proxy to access services in ACK 2, and mutual Transport Layer Security (mTLS) encryption is enabled for this communication path by default.

You can view the definition of the cross-cluster mesh proxy by connecting to the corresponding cluster with its kubeconfig file. A cross-cluster mesh proxy is named in the following format: asm-cross-network-${ACK ID}. You can adjust the configurations of a cross-cluster mesh proxy, such as its resources and the number of replicas, based on your business requirements.
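
For example, assuming the proxy runs as a Deployment in the istio-system namespace (as the pod names in Step 3 suggest), you can view and scale it as follows. Run the commands with the kubeconfig file of ACK 2 and substitute the actual asm-cross-network-${ACK ID} name from your cluster:

  # View the cross-cluster mesh proxy workload.
  kubectl -n istio-system get deployment | grep asm-cross-network

  # Scale the proxy, for example to three replicas (the name is illustrative).
  kubectl -n istio-system scale deployment istio-asm-cross-network-${ACK ID} --replicas=3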

Note

For ASM instances earlier than v1.22, a cross-cluster mesh proxy works as a TCP proxy and cannot perform Layer 7 load balancing. In this case, load imbalances may occur.

Step 3: Check cross-cluster access

The preceding network configurations take effect only when application pods start. If an application pod was already started before you modified the network configurations, restart the pod, as shown in the following sketch.
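
For example, if a Deployment named my-app (a hypothetical name) was already running before you changed the network configurations, you can restart it and then check which logical network its new pod was assigned. The topology.istio.io/network pod label is set by the open source Istio sidecar injector; this sketch assumes ASM behaves the same way:

  # Restart the workload so that its pods pick up the new network settings.
  kubectl rollout restart deployment my-app

  # Check the logical network assigned to a pod (the label name assumes
  # open source Istio behavior).
  kubectl get pod ${POD_NAME} -o jsonpath='{.metadata.labels.topology\.istio\.io/network}'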

  1. Create the sleep application in ACK 1. Sample YAML content:

    sleep.yaml

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: sleep
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sleep
      labels:
        app: sleep
        service: sleep
    spec:
      ports:
      - port: 80
        name: http
      selector:
        app: sleep
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sleep
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: sleep
      template:
        metadata:
          labels:
            app: sleep
        spec:
          terminationGracePeriodSeconds: 0
          serviceAccountName: sleep
          containers:
          - name: sleep
            image: registry.cn-hangzhou.aliyuncs.com/acs/curl:8.1.2
            command: ["/bin/sleep", "infinity"]
            imagePullPolicy: IfNotPresent
            volumeMounts:
            - mountPath: /etc/sleep/tls
              name: secret-volume
          volumes:
          - name: secret-volume
            secret:
              secretName: sleep-secret
              optional: true
    ---
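    Save the content to a file named sleep.yaml and deploy it by using the kubeconfig file of ACK 1:

      kubectl apply -f sleep.yaml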
  2. Create the HTTPBin application in ACK 2. Sample YAML content:

    httpbin.yaml

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: httpbin
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: httpbin
      labels:
        app: httpbin
        service: httpbin
    spec:
      ports:
      - name: http
        port: 8000
        targetPort: 80
      selector:
        app: httpbin
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: httpbin
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: httpbin
          version: v1
      template:
        metadata:
          labels:
            app: httpbin
            version: v1
        spec:
          serviceAccountName: httpbin
          containers:
          - image: registry-cn-hangzhou.ack.aliyuncs.com/ack-demo/httpbin:0.1.0
            imagePullPolicy: IfNotPresent
            name: httpbin
            ports:
            - containerPort: 80
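
    Save the content to a file named httpbin.yaml and deploy it by using the kubeconfig file of ACK 2:

      kubectl apply -f httpbin.yaml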
  3. Access the HTTPBin application from the pod running the sleep application. (Connect to the cluster by using the kubeconfig file of ACK 1.)

    1. Obtain the name of the pod running the sleep application.

      kubectl get pod | grep sleep
    2. Run the curl command to access the HTTPBin application from the sleep application.

      kubectl exec ${Name of the pod running the sleep application} -- curl httpbin:8000/status/418

      The following output shows that the access is successful:

        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100   135  100   135    0     0  16075      0 --:--:-- --:--:-- --:--:-- 16875
      
          -=[ teapot ]=-
      
             _...._
           .'  _ _ `.
          | ."` ^ `". _,
          \_;`"---"`|//
            |       ;/
            \_     _/
              `"""`
  4. Verify that the sleep application uses the cross-cluster mesh proxy to access the HTTPBin application.

    1. Check the logs of the pod running the sleep application. (Connect to the cluster by using the kubeconfig file of ACK 1.)

      kubectl logs ${Name of the pod running the sleep application} -c istio-proxy | tail -1

      The following command output is returned:

      {"authority_for":"httpbin:8000","bytes_received":"0","bytes_sent":"135","downstream_local_address":"xxx.xxx.xxx.xx:8000","downstream_remote_address":"xx.x.xxx.xxx:xxxxx","duration":"7","istio_policy_status":"-","method":"GET","path":"/status/418","protocol":"HTTP/1.1","request_id":"08dc43e9-60c8-4f2f-910a-b727172ce311","requested_server_name":"-","response_code":"418","response_flags":"-","route_name":"default","start_time":"2024-05-23T10:06:27.289Z","trace_id":"-","upstream_cluster":"outbound|8000||httpbin.default.svc.cluster.local","upstream_host":"xxx.xx.xxx.xxx:15443","upstream_local_address":"xx.x.xxx.xxx:60248","upstream_response_time":"7","upstream_service_time":"7","upstream_transport_failure_reason":"-","user_agent":"curl/8.1.2","x_forwarded_for":"-"}

      The upstream_host field identifies the destination address that the pod running the sleep application directly accesses. The output shows that the request is sent to port 15443, which is the dedicated port of the cross-cluster mesh proxy.
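
      You can also confirm on the ACK 2 side that the mesh proxy listens on port 15443. The following sketch assumes that the proxy is exposed through a Service in the istio-system namespace whose name also starts with asm-cross-network (run it with the kubeconfig file of ACK 2):

        # List the mesh proxy Service and check its ports for 15443.
        kubectl -n istio-system get svc | grep asm-cross-network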

    2. Check the logs of the cross-cluster mesh proxy. (Connect to the cluster by using the kubeconfig file of ACK 2.)

      First, obtain the pods that run the cross-cluster mesh proxy.

      kubectl -n istio-system get pod | grep asm-cross-network
      
      istio-asm-cross-network-c0859be51XXX   1/1     Running   0          20h
      istio-asm-cross-network-c0859be51XXX   1/1     Running   0          20h

      The output shows that two pods are running the cross-cluster mesh proxy by default. You can check the logs of these two pods separately. Their logs are similar.

       kubectl logs istio-asm-cross-network-c0859be51XXX -n istio-system  | tail -1
      
      {"authority_for":"-","bytes_received":"xxxx","bytes_sent":"xxxx","downstream_local_address":"xx.xx.x.xx:15443","downstream_remote_address":"xx.xx.xx.xx:xxxxx","duration":"1568569","istio_policy_status":"-","method":"-","path":"-","protocol":"-","request_id":"-","requested_server_name":"outbound_.8000_._.httpbin.default.svc.cluster.local","response_code":"0","response_flags":"-","route_name":"-","start_time":"2024-05-23T08:41:16.618Z","trace_id":"-","upstream_cluster":"outbound_.8000_._.httpbin.default.svc.cluster.local","upstream_host":"xx.xx.xx.xxx:80","upstream_local_address":"xx.x.xx.xx:xxxxx","upstream_response_time":"-","upstream_service_time":"-","upstream_transport_failure_reason":"-","user_agent":"-","x_forwarded_for":"-"}