This topic describes how to configure virtual services and ASMHeaderPropagation CustomResourceDefinitions (CRDs) to implement traffic lanes and traffic shifting in permissive mode when baggage headers are used as end-to-end (E2E) pass-through request headers.
Prerequisites
A Service Mesh (ASM) instance of Enterprise Edition or Ultimate Edition is created, and the instance version is 1.21.6.54 or later. For more information, see Create an ASM instance or Update an ASM instance.
A Kubernetes cluster is added to the ASM instance. For more information, see Add a cluster to an ASM instance.
An ASM ingress gateway named ingressgateway is created. For more information, see Create an ingress gateway.
Feature introduction
Baggage is a standardized mechanism defined by OpenTelemetry to transfer context information across processes in the call chains of a distributed system. The context is carried in an HTTP header named baggage, whose value is a comma-separated list of key-value pairs. You can use the baggage header to transfer context data such as the tenant ID, trace ID, and security credentials, which enables tracing analysis and log association without the need to modify code. Example:
baggage: userId=alice,serverNode=DF%2028,isProduction=false
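To make the key-value format concrete, the following is a minimal sketch of parsing a baggage header value in application code. The parsing logic follows the W3C Baggage format (comma-separated members, percent-encoded values, optional semicolon-delimited properties); the function name is illustrative.

```python
# Minimal sketch: parse a W3C baggage header value into a dict.
from urllib.parse import unquote

def parse_baggage(header_value: str) -> dict:
    """Split a baggage header into key-value pairs, percent-decoding values."""
    entries = {}
    for member in header_value.split(","):
        key, _, value = member.strip().partition("=")
        # Drop optional properties such as ";metadata" after the value.
        value = value.split(";", 1)[0]
        entries[key] = unquote(value)
    return entries

print(parse_baggage("userId=alice,serverNode=DF%2028,isProduction=false"))
# {'userId': 'alice', 'serverNode': 'DF 28', 'isProduction': 'false'}
```

Note that `%20` decodes to a space, which is why the serverNode value in the example above is written as `DF%2028`.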
Based on the context information of a baggage header, ASM can use ASMHeaderPropagation CRDs to help you pass through any request header on the service call chain and implement traffic lanes in permissive mode based on the baggage header. For more information about traffic lanes in permissive mode, see Overview of traffic lanes.
Step 1: Configure the feature that allows pods for Services to transparently transmit baggage headers
This section shows you how to use the auto-instrumentation capability of the OpenTelemetry Operator to enable pods for Services in the Kubernetes cluster to transparently transmit baggage headers.
Deploy the OpenTelemetry Operator.
Use a kubectl client to connect to the Kubernetes cluster that is added to the ASM instance. Run the following command to create a namespace named opentelemetry-operator-system:
kubectl create namespace opentelemetry-operator-system
Run the following commands to use Helm to install the OpenTelemetry Operator in the opentelemetry-operator-system namespace. For more information about how to install Helm, see Installing Helm.
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install \
  --namespace=opentelemetry-operator-system \
  --version=0.46.0 \
  --set admissionWebhooks.certManager.enabled=false \
  --set admissionWebhooks.certManager.autoGenerateCert=true \
  --set manager.image.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/opentelemetry-operator" \
  --set manager.image.tag="0.92.1" \
  --set kubeRBACProxy.image.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/kube-rbac-proxy" \
  --set kubeRBACProxy.image.tag="v0.13.1" \
  --set manager.collectorImage.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/opentelemetry-collector" \
  --set manager.collectorImage.tag="0.97.0" \
  --set manager.opampBridgeImage.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/operator-opamp-bridge" \
  --set manager.opampBridgeImage.tag="0.97.0" \
  --set manager.targetAllocatorImage.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/target-allocator" \
  --set manager.targetAllocatorImage.tag="0.97.0" \
  --set manager.autoInstrumentationImage.java.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-java" \
  --set manager.autoInstrumentationImage.java.tag="1.32.1" \
  --set manager.autoInstrumentationImage.nodejs.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-nodejs" \
  --set manager.autoInstrumentationImage.nodejs.tag="0.49.1" \
  --set manager.autoInstrumentationImage.python.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-python" \
  --set manager.autoInstrumentationImage.python.tag="0.44b0" \
  --set manager.autoInstrumentationImage.dotnet.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-dotnet" \
  --set manager.autoInstrumentationImage.dotnet.tag="1.2.0" \
  --set manager.autoInstrumentationImage.go.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/opentelemetry-go-instrumentation" \
  --set manager.autoInstrumentationImage.go.tag="v0.10.1.alpha-2-aliyun" \
  opentelemetry-operator open-telemetry/opentelemetry-operator
Run the following command to check whether the OpenTelemetry Operator works properly:
kubectl get pod -n opentelemetry-operator-system
Expected output:
NAME                                      READY   STATUS    RESTARTS   AGE
opentelemetry-operator-854fb558b5-pvllj   2/2     Running   0          1m
Configure auto-instrumentation.
Create an instrumentation.yaml file that contains the following content:
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: demo-instrumentation
spec:
  propagators:
    - baggage
  sampler:
    type: parentbased_traceidratio
    argument: "1"
Run the following command to declare auto-instrumentation in the default namespace:
kubectl apply -f instrumentation.yaml
Note: For the OpenTelemetry framework, deploying the OpenTelemetry Collector to collect observability data is one of the best practices. The steps for deploying the OpenTelemetry Collector are not described here. For more information about how to collect ASM tracing data to Managed Service for OpenTelemetry, see Collect ASM tracing data to Managed Service for OpenTelemetry.
Step 2: Deploy sample Services
Enable automatic sidecar proxy injection for the default namespace. For more information, see Manage global namespaces.
Note: For more information about automatic sidecar proxy injection, see Enable automatic sidecar proxy injection.
Create a mock.yaml file that contains the following content:
The annotations instrumentation.opentelemetry.io/inject-java: "true" and instrumentation.opentelemetry.io/container-names: "default" are added to each Service pod to declare that the corresponding Service is implemented in Java and that the OpenTelemetry Operator must auto-instrument the container named default.
Run the following command to deploy the Services:
kubectl apply -f mock.yaml
Based on the auto-instrumentation mechanism of OpenTelemetry, pods for Services can automatically pass through baggage headers in call chains.
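For reference, the annotations described above sit in the pod template of each Deployment. The following is an illustrative fragment only; the Deployment name, labels, and image are assumptions, and the actual mock.yaml defines the full Deployments and Services for mocka, mockb, and mockc.

```yaml
# Illustrative fragment; names and image are placeholders, not the real mock.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mocka-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mocka
      version: v1
  template:
    metadata:
      labels:
        app: mocka
        version: v1
      annotations:
        # Ask the OpenTelemetry Operator to inject the Java agent
        # into the container named "default".
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "default"
    spec:
      containers:
        - name: default
          image: mock-service-image  # placeholder
```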
Step 3: Create a destination rule and an ASMHeaderPropagation CRD to implement traffic lanes in permissive mode
Create a destination rule.
Create a dr-mock.yaml file that contains the following content:
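As a sketch of the content described below, a destination rule for the mocka Service might look like the following. The resource name is an assumption; the rules for mockb and mockc are analogous, each omitting the subsets that the Service does not have (mockb has no v2 subset, and mockc has no v3 subset).

```yaml
# Sketch only: subsets select pods by their "version" label.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: mocka
spec:
  host: mocka
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
    - name: v3
      labels:
        version: v3
```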
The preceding YAML file classifies the mocka, mockb, and mockc Services into versions v1, v2, and v3 based on the version labels of their pods. The mocka Service has versions v1, v2, and v3; the mockb Service has versions v1 and v3; and the mockc Service has versions v1 and v2.
Use kubectl to connect to the ASM instance and run the following command to create the destination rule:
kubectl apply -f dr-mock.yaml
Create an ASMHeaderPropagation CRD. If the pods for Services can pass through baggage headers, you can use the ASMHeaderPropagation CRD to specify the custom request headers that you want to pass through on the call chain in the context of baggage headers.
Create a propagation.yaml file that contains the following content. The file specifies that the request header named version is passed through on the call chain.
apiVersion: istio.alibabacloud.com/v1beta1
kind: ASMHeaderPropagation
metadata:
  name: version-propagation
spec:
  headers:
    - version
Use kubectl to connect to the ASM instance and run the following command to create the ASMHeaderPropagation CRD:
kubectl apply -f propagation.yaml
Create a virtual service.
Create a vs-mock.yaml file that contains the following content:
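As a sketch of the intent described below, a virtual service for the mockb Service (which has no v2 version) might match the passed-through version header and shift unmatched versions to v1. The resource name is an assumption, and the subset names come from the destination rule created in the previous step; the virtual services for mocka and mockc follow the same pattern.

```yaml
# Sketch only: route by the "version" header, falling back to the v1 baseline.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: mockb
spec:
  hosts:
    - mockb
  http:
    - match:
        - headers:
            version:
              exact: v3
      route:
        - destination:
            host: mockb
            subset: v3
    # mockb has no v2 subset, so v2 requests (and any unmatched
    # requests) are shifted to the baseline version v1.
    - route:
        - destination:
            host: mockb
            subset: v1
```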
The preceding YAML file creates routing rules for the lanes in the call chain of mocka -> mockb -> mockc. Requests are forwarded to the corresponding version by matching the version request header that is passed through on the call chain. For example, requests with the version: v2 header are sent to the v2 versions of the Services. The virtual service also specifies a traffic shifting rule: if the matching version of a Service does not exist in the call chain, requests are shifted to version v1 of that Service.
Use kubectl to connect to the ASM instance and run the following command to create the virtual service:
kubectl apply -f vs-mock.yaml
Create traffic routing rules on the ingress gateway.
Create a gw-mock.yaml file that contains the following content:
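A sketch of the weighted routing rule described below, bound to the ingress gateway, might look like the following. The resource name, host match, and gateway reference (the ingressgateway gateway in the istio-system namespace) are assumptions.

```yaml
# Sketch only: weighted 4:3:3 split across mocka versions, with a
# "version" header set per destination so downstream lane routing works.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: mocka-gateway-vs
spec:
  hosts:
    - '*'
  gateways:
    - istio-system/ingressgateway
  http:
    - route:
        - destination:
            host: mocka
            subset: v1
          weight: 40
          headers:
            request:
              set:
                version: v1
        - destination:
            host: mocka
            subset: v2
          weight: 30
          headers:
            request:
              set:
                version: v2
        - destination:
            host: mocka
            subset: v3
          weight: 30
          headers:
            request:
              set:
                version: v3
```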
The preceding YAML file shows the traffic routing rules created on the ingress gateway for the call chain of mocka -> mockb -> mockc. The traffic is routed by weight. Traffic sent to the ingress gateway is forwarded to the versions v1, v2, and v3 of the mocka Service at a ratio of 4:3:3. When the ingress gateway forwards a request to the mocka Service, the ingress gateway adds a version header to the request based on the version of the destination Service to ensure that the corresponding version of Services in the lane exists in the call chain.
Use kubectl to connect to the ASM instance and run the following command to create traffic routing rules:
kubectl apply -f gw-mock.yaml
Step 4: Verify that traffic lanes take effect
Obtain the public IP address of the ingress gateway. For more information, see Step 2: Obtain the IP address of the ASM ingress gateway.
Run the following command to configure an environment variable. Replace xxx.xxx.xxx.xxx with the IP address obtained in the previous step.
export ASM_GATEWAY_IP=xxx.xxx.xxx.xxx
Check whether the end-to-end canary release feature takes effect.
Run the following command to send requests to the ingress gateway and check how traffic is distributed across the lanes:
for i in {1..100}; do curl http://${ASM_GATEWAY_IP} ; echo ''; sleep 1; done;
Expected output:
-> mocka(version: v1, ip: 192.168.1.27)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v1, ip: 192.168.1.27)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v2, ip: 192.168.1.28)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v2, ip: 192.168.1.1)
-> mocka(version: v3, ip: 192.168.1.26)-> mockb(version: v3, ip: 192.168.1.29)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v1, ip: 192.168.1.27)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v1, ip: 192.168.1.27)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v2, ip: 192.168.1.28)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v2, ip: 192.168.1.1)
-> mocka(version: v2, ip: 192.168.1.28)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v2, ip: 192.168.1.1)
-> mocka(version: v2, ip: 192.168.1.28)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v2, ip: 192.168.1.1)
-> mocka(version: v2, ip: 192.168.1.28)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v2, ip: 192.168.1.1)
-> mocka(version: v3, ip: 192.168.1.26)-> mockb(version: v3, ip: 192.168.1.29)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v1, ip: 192.168.1.27)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v2, ip: 192.168.1.28)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v2, ip: 192.168.1.1)
-> mocka(version: v3, ip: 192.168.1.26)-> mockb(version: v3, ip: 192.168.1.29)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v1, ip: 192.168.1.27)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v3, ip: 192.168.1.26)-> mockb(version: v3, ip: 192.168.1.29)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v3, ip: 192.168.1.26)-> mockb(version: v3, ip: 192.168.1.29)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v1, ip: 192.168.1.27)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v3, ip: 192.168.1.26)-> mockb(version: v3, ip: 192.168.1.29)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v2, ip: 192.168.1.28)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v2, ip: 192.168.1.1)
-> mocka(version: v1, ip: 192.168.1.27)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v3, ip: 192.168.1.26)-> mockb(version: v3, ip: 192.168.1.29)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v1, ip: 192.168.1.27)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v3, ip: 192.168.1.26)-> mockb(version: v3, ip: 192.168.1.29)-> mockc(version: v1, ip: 192.168.1.14)
The output indicates that traffic is sent to versions v1, v2, and v3 of the Services in the call chain at a ratio of about 4:3:3, with v1 as the baseline version. If a specific version of a Service does not exist in the call chain, version v1 of that Service is called instead.