If you want to optimize your network topology, scale out application servers, or throttle user traffic, you can use Traffic Management Center in the Service Mesh (ASM) console to smoothly migrate TCP traffic. This helps you ensure business continuity and high availability of your services. This topic describes how to migrate TCP traffic from one version of an application to another based on the TCP Traffic Shifting example provided by Istio. In this example, an application named tcp-echo has two versions: v1 and v2. Version v1 adds the prefix "one" to the timestamps in responses before it returns them, and version v2 adds the prefix "two". You can then adjust the traffic splitting policy based on the traffic migration results to meet your business requirements and performance targets.
Prerequisites
The following prerequisites are met:
An ACK cluster is created. For more information, see Create an ACK dedicated cluster and Create an ACK managed cluster.
The ACK cluster is added to your ASM instance. For more information, see Create an ASM instance and Add a cluster to an ASM instance.
Step 1: Deploy the two versions of the sample application
Deploy the two versions of the tcp-echo application.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, click the name of the cluster that you want to manage and choose Workloads > Deployments in the left-side navigation pane.
At the top of the Deployments page, select the namespace where you want to deploy the two versions of the tcp-echo application from the Namespace drop-down list, and click Create from YAML in the upper-right corner.
Select Custom from the Sample Template drop-down list, copy the following YAML code to the Template code editor, and then click Create.
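The following manifest is a minimal sketch adapted from the Istio TCP Traffic Shifting sample; the image tag and replica count are assumptions, so adjust them to match the image that you actually use. It deploys tcp-echo-v1, which returns the prefix "one", and tcp-echo-v2, which returns the prefix "two". Both Deployments carry the app: tcp-echo label and a version label that the destination rule in Step 2 relies on.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo-v1
  labels:
    app: tcp-echo
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-echo
      version: v1
  template:
    metadata:
      labels:
        app: tcp-echo
        version: v1
    spec:
      containers:
      - name: tcp-echo
        # The image tag is an assumption; use the tcp-echo-server image version that is available to you.
        image: istio/tcp-echo-server:1.2
        imagePullPolicy: IfNotPresent
        # "9000" is the listening port; "one" is the prefix that v1 adds to responses.
        args: [ "9000", "one" ]
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo-v2
  labels:
    app: tcp-echo
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-echo
      version: v2
  template:
    metadata:
      labels:
        app: tcp-echo
        version: v2
    spec:
      containers:
      - name: tcp-echo
        # The image tag is an assumption.
        image: istio/tcp-echo-server:1.2
        imagePullPolicy: IfNotPresent
        # "two" is the prefix that v2 adds to responses.
        args: [ "9000", "two" ]
        ports:
        - containerPort: 9000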
Return to the Deployments page. Then, you can find the two versions of the tcp-echo application.
Create a service named tcp-echo and expose the service.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, click the name of the cluster that you want to manage and choose Network > Services in the left-side navigation pane.
At the top of the Services page, select the namespace where you want to create the service from the Namespace drop-down list, and click Create in the upper-right corner.
In the Create Service dialog box, configure the following parameters and click OK.
Parameter
Description
Name
The name of the service. In this example, the name is set to tcp-echo.
Service Type
The type of the service, which specifies how the service is exposed. Valid values: Cluster IP, Node Port, and Server Load Balancer.
Note: The Headless Service check box appears only when you set the Service Type parameter to Cluster IP. If you select this check box, you can use a headless service to interface with other service discovery mechanisms, instead of being tied to the implementation of service discovery in Kubernetes.
Backend
The Deployment to be associated with the service. In this example, the Name parameter is set to app and the Value parameter is set to tcp-echo-v1.
Note: The service uses the app label of the associated Deployment as the selector to determine to which Deployment the traffic is routed. The tcp-echo-v1 and tcp-echo-v2 Deployments share the same app label, which is app: tcp-echo. Therefore, the service can be associated with either of the two Deployments.
External Traffic Policy
You can select Local or Cluster.
Note: The External Traffic Policy parameter appears only when you set the Service Type parameter to Node Port or Server Load Balancer.
Port Mapping
In this example, the Name parameter is set to tcp, the Service Port and Container Port parameters are set to 9000, and the Protocol parameter is set to TCP.
Annotations
You can add annotations to a service to configure load balancing. For example, the service.beta.kubernetes.io/alicloud-loadbalancer-bandwidth:20 annotation sets the maximum bandwidth of the service to 20 Mbit/s, which limits the amount of traffic that flows through the service. For more information, see Add annotations to the YAML file of a Service to configure CLB instances.
Label
You can add one or more labels to a service to identify the service.
After the tcp-echo service is created, you can see the service on the Services page.
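If you prefer to define the service in YAML instead of using the console, the following manifest is a minimal sketch that matches the settings described above, assuming the Cluster IP service type:
apiVersion: v1
kind: Service
metadata:
  name: tcp-echo
spec:
  # The app: tcp-echo label matches both the tcp-echo-v1 and tcp-echo-v2 Deployments.
  selector:
    app: tcp-echo
  ports:
  - name: tcp
    protocol: TCP
    port: 9000
    targetPort: 9000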
Step 2: Configure a routing rule
You can create an Istio gateway, a virtual service, and a destination rule for the ASM instance to route all traffic to version v1 of the tcp-echo application.
Log on to the ASM console. In the left-side navigation pane, choose Mesh Management.
On the Mesh Management page, find the ASM instance that you want to configure. Click the name of the ASM instance or click Manage in the Actions column.
Create an Istio gateway.
On the details page of the ASM instance, choose Traffic Management Center > Gateway in the left-side navigation pane. On the page that appears, click Create from YAML.
On the Create page, select default from the Namespace drop-down list, select a template from the Template drop-down list, copy the following YAML code to the YAML code editor, and then click Create.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tcp-echo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
Create a virtual service.
On the details page of the ASM instance, choose Traffic Management Center > VirtualService in the left-side navigation pane. On the page that appears, click Create from YAML.
On the Create page, select default from the Namespace drop-down list, select a template from the Template drop-down list, copy the following YAML code to the YAML code editor, and then click Create.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
Create a destination rule.
On the details page of the ASM instance, choose Traffic Management Center > DestinationRule in the left-side navigation pane. On the page that appears, click Create from YAML.
On the Create page, select default from the Namespace drop-down list, select a template from the Template drop-down list, copy the following YAML code to the YAML code editor, and then click Create.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tcp-echo-destination
spec:
  host: tcp-echo
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
Step 3: Deploy an ingress gateway
Add port 31400 to the ingress gateway and map the port to port 31400 of the Istio gateway.
Log on to the ASM console. In the left-side navigation pane, choose Mesh Management.
On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, click Ingress Gateway.
On the Ingress Gateway page, click Create. On the Create page, configure the parameters and then click Create.
The following table describes some of the parameters. For more information, see Create an ingress gateway.
Parameter
Description
Cluster
The cluster in which you want to deploy the ingress gateway.
CLB Instance Type
The access type of the CLB instance. In this example, Internet Access is selected.
Create a CLB Instance or Use Existing CLB Instance
Use Existing CLB Instance: Select an existing CLB instance from the drop-down list.
Create a CLB Instance: Click Create a CLB Instance and select the CLB instance specifications that you need from the drop-down list.
Note: We recommend that you select a separate CLB instance for each Kubernetes Service. If multiple Kubernetes Services share the same CLB instance, the following risks and limits exist:
If you configure a Kubernetes Service to use a CLB instance that is already used by another Kubernetes Service, the existing listeners of the CLB instance are forcibly overwritten. This may interrupt the original Kubernetes Service.
If you create a CLB instance when you create a Kubernetes Service, the CLB instance cannot be shared among Kubernetes Services. Only CLB instances that you create in the CLB console or by calling API operations can be shared.
Kubernetes Services that share the same CLB instance must use different frontend listening ports. Otherwise, port conflicts may occur.
If multiple Kubernetes Services share the same CLB instance, listener names and vServer group names are used as unique identifiers in Kubernetes. Do not modify the names of listeners or vServer groups.
You cannot share a CLB instance across clusters.
Port Mapping
You can click Add Port and specify the protocol and service port in the row that appears. In this example, the protocol is set to TCP and the service port is set to 31400.
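After the ingress gateway is deployed, you can optionally confirm that port 31400 is exposed. The following command is a sketch that assumes the ingress gateway service is named istio-ingressgateway and runs in the istio-system namespace, as in Step 4:
$ kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.port==31400)]}'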
Step 4: Verify the result
Use a kubectl client to check whether all TCP traffic is routed to the v1 version of the tcp-echo application.
Use the kubectl client to connect to the ACK cluster. For information, see Step 2: Select a type of cluster credentials.
Run the following commands to query the IP address and port number of the tcp-echo service:
$ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}')
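You can optionally print the two variables to confirm that they are set before you continue:
$ echo $INGRESS_HOST $INGRESS_PORT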
Run the telnet command to connect to the tcp-echo service.
$ telnet $INGRESS_HOST $INGRESS_PORT
Trying xxx.xxx.xxx.xxx...
Connected to xxx.xxx.xxx.xxx.
Escape character is '^]'
Enter a string and press Enter.
If the returned string is prefixed with "one", the tcp-echo application is deployed and all the service traffic is routed to the v1 version of the tcp-echo application.
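For example, a session might look like the following, where hello is a sample string that you enter and the line prefixed with "one" is the response from the v1 version:
hello
one hello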
Step 5: Migrate a proportion of traffic to the tcp-echo-v2 version
In this example, 20% of the traffic is routed to the tcp-echo-v2 version and the remaining 80% is routed to the tcp-echo-v1 version.
Modify the configuration of the virtual service of the ASM instance.
On the details page of the ASM instance, choose Traffic Management Center > VirtualService in the left-side navigation pane.
On the VirtualService page, find the tcp-echo service and click YAML in the Actions column.
In the Edit dialog box, copy the following YAML content to the code editor and click OK.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
      weight: 80
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v2
      weight: 20
Run the following command to send 10 requests to the tcp-echo service:
$ for i in {1..10}; do \
docker run -e INGRESS_HOST=$INGRESS_HOST -e INGRESS_PORT=$INGRESS_PORT -it --rm busybox sh -c "(date; sleep 1) | nc $INGRESS_HOST $INGRESS_PORT"; \
done
one Mon Nov 12 23:38:45 UTC 2018
two Mon Nov 12 23:38:47 UTC 2018
one Mon Nov 12 23:38:50 UTC 2018
one Mon Nov 12 23:38:52 UTC 2018
one Mon Nov 12 23:38:55 UTC 2018
two Mon Nov 12 23:38:57 UTC 2018
one Mon Nov 12 23:39:00 UTC 2018
one Mon Nov 12 23:39:02 UTC 2018
one Mon Nov 12 23:39:05 UTC 2018
one Mon Nov 12 23:39:07 UTC 2018
The preceding output indicates that 20% of the traffic is routed to the tcp-echo-v2 version.
Note: If you send only 10 requests in a test, the traffic may not be routed to the tcp-echo-v1 and tcp-echo-v2 versions at exactly the specified ratio. However, the actual ratio approaches 80:20 as the sample size increases.
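When the results meet your expectations, you can continue to adjust the weights in the same way. For example, to complete the migration and route all traffic to the tcp-echo-v2 version, you can edit the virtual service again and replace the tcp routing section with a configuration similar to the following sketch:
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v2
      weight: 100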