Alibaba Cloud Service Mesh:Collect ASM tracing data to Managed Service for OpenTelemetry

Last Updated: Jul 12, 2024

Service Mesh (ASM) allows you to report tracing data to Managed Service for OpenTelemetry. You can view call information, and the topology generated from that information, in the Managed Service for OpenTelemetry console. This topic describes how to collect ASM tracing data to Managed Service for OpenTelemetry.

Prerequisites

Procedure

Perform the following steps based on the version of your ASM instance:

For ASM instances whose versions are earlier than 1.17.2.35

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose ASM Instance > Base Information.

  3. On the Base Information page, click Settings. In the Settings Update panel, select Enable Tracing Analysis, set Sampling Percentage, select Enable Managed Service for OpenTelemetry for Sampling Method, and then click OK.

  4. In the left-side navigation pane, choose Observability Management Center > Tracing Analysis. You are redirected to the Managed Service for OpenTelemetry console. In the console, you can find the ASM tracing data.

    For more information about Managed Service for OpenTelemetry, see What is Managed Service for OpenTelemetry?

Note

If you no longer need to use this feature, clear Enable Tracing Analysis in the Settings Update panel and click OK.

For ASM instances whose versions are 1.17.2.35 or later and earlier than 1.18.0.124

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Observability Management Center > Tracing Analysis.

  3. On the Tracing Analysis page, click Collect ASM Tracing Data to Managed Service for OpenTelemetry. In the Submit message, click OK.

  4. Click Open the Managed Service for OpenTelemetry Console to view the ASM tracing data.

    For more information about Managed Service for OpenTelemetry, see What is Managed Service for OpenTelemetry?

Note

If you no longer need to use this feature, click Disable Collection on the Tracing Analysis page. In the Submit message, click OK.

For ASM instances whose versions are 1.18.0.124 or later

Step 1: Deploy the OpenTelemetry Operator

  1. Use kubectl to connect to the ACK cluster based on the information in the kubeconfig file. Then, run the following command to create the opentelemetry-operator-system namespace:

    kubectl create namespace opentelemetry-operator-system
  2. Run the following commands to use Helm to install the OpenTelemetry Operator in the opentelemetry-operator-system namespace:

    helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
    helm install --namespace=opentelemetry-operator-system opentelemetry-operator open-telemetry/opentelemetry-operator \
      --set "manager.collectorImage.repository=otel/opentelemetry-collector-k8s" \
      --set admissionWebhooks.certManager.enabled=false \
      --set admissionWebhooks.autoGenerateCert.enabled=true
  3. Run the following command to check whether the OpenTelemetry Operator works properly:

    kubectl get pod -n opentelemetry-operator-system

    Expected output:

    NAME                                      READY   STATUS    RESTARTS   AGE
    opentelemetry-operator-854fb558b5-pvllj   2/2     Running   0          1m

    The output shows that the pod is in the Running state, which indicates that the OpenTelemetry Operator works properly.
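
    If the pod is not yet in the Running state, you can also inspect the Helm release that installed the Operator. A minimal sketch, assuming the release name and namespace used in the preceding step:

    helm status opentelemetry-operator -n opentelemetry-operator-system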

Step 2: Create an OpenTelemetry Collector

  1. Use the content in the following code block to create a collector.yaml file.

    Replace ${ENDPOINT} in the YAML file with a virtual private cloud (VPC) endpoint supporting the gRPC protocol. Replace ${TOKEN} with the authentication token. For more information about how to obtain the endpoints supported by Managed Service for OpenTelemetry and authentication tokens, see Connect to Managed Service for OpenTelemetry and authenticate clients.

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      labels:
        app.kubernetes.io/managed-by: opentelemetry-operator
      name: default
      namespace: opentelemetry-operator-system
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      config: |
        extensions:
          zpages:
            endpoint: 0.0.0.0:55679 
        receivers:
          otlp:
            protocols:
              grpc:
                endpoint: 0.0.0.0:4317
        exporters:
          debug:
            verbosity: detailed
          otlp:
            endpoint: ${ENDPOINT}
            tls:
              insecure: true
            headers:
              Authentication: ${TOKEN}
        service:
          extensions: [zpages]
          pipelines:
            traces:
              receivers: [otlp]
              processors: []
              exporters: [otlp, debug]
      ingress:
        route: {}
      managementState: managed
      mode: deployment
      observability:
        metrics: {}
      podDisruptionBudget:
        maxUnavailable: 1
      replicas: 1
      resources: {}
      targetAllocator:
        prometheusCR:
          scrapeInterval: 30s
        resources: {}
      upgradeStrategy: automatic
    
  2. Use kubectl to connect to the ACK cluster based on the information in the kubeconfig file, and then run the following command to deploy the OpenTelemetry Collector in the cluster:

    kubectl apply -f collector.yaml
  3. Run the following command to check whether the OpenTelemetry Collector is started:

    kubectl get pod -n opentelemetry-operator-system

    Expected output:

    NAME                                      READY   STATUS    RESTARTS   AGE
    opentelemetry-operator-854fb558b5-pvllj   2/2     Running   0          3m
    default-collector-5cbb4497f4-2hjqv        1/1     Running   0          30s

    The output indicates that the OpenTelemetry Collector has started normally.

  4. Run the following command to check whether a service is created for the OpenTelemetry Collector:

    kubectl get svc -n opentelemetry-operator-system

    Expected output:

    NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
    opentelemetry-operator           ClusterIP   172.16.138.165   <none>        8443/TCP,8080/TCP   3m
    opentelemetry-operator-webhook   ClusterIP   172.16.127.0     <none>        443/TCP             3m
    default-collector                ClusterIP   172.16.145.93    <none>        4317/TCP            30s
    default-collector-headless       ClusterIP   None             <none>        4317/TCP            30s
    default-collector-monitoring     ClusterIP   172.16.136.5     <none>        8888/TCP            30s

    The output indicates that a service is created for the OpenTelemetry Collector.
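
    Optionally, you can check the Collector's own telemetry endpoint, which is exposed through the default-collector-monitoring service on port 8888. A minimal sketch, assuming the Collector's default internal metrics configuration:

    # Forward the Collector's internal metrics port to your local machine.
    kubectl port-forward -n opentelemetry-operator-system svc/default-collector-monitoring 8888:8888
    # In another terminal, fetch a sample of the Collector's Prometheus metrics.
    curl -s localhost:8888/metrics | head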

Step 3: Enable Managed Service for OpenTelemetry in the ASM console

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Observability Management Center > Observability Settings.

  3. In the Tracing Analysis Settings section of the Observability Settings page, set Sampling Percentage to 100 and click Submit.

  4. In the left-side navigation pane, choose Observability Management Center > Tracing Analysis. Enter default-collector.opentelemetry-operator-system.svc.cluster.local in the OpenTelemetry Service Address/Domain Name field, enter 4317 in the OpenTelemetry Service Port field, and then click Collect ASM Tracing Data to Managed Service for OpenTelemetry.
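
    The address and port correspond to the default-collector service created in Step 2. A minimal sketch for confirming the OTLP gRPC port before you submit the settings:

    # Print the ports exposed by the Collector service; 4317 is the OTLP gRPC port.
    kubectl get svc default-collector -n opentelemetry-operator-system -o jsonpath='{.spec.ports[*].port}{"\n"}'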

Step 4: Deploy test applications

Deploy the Bookinfo and sleep applications. For more information, see Deploy an application in an ACK cluster that is added to an ASM instance. After you save both files, you can apply them as shown after the sleep.yaml content below.

  • bookinfo.yaml (not shown here; see the topic referenced above)

  • sleep.yaml (shown below)

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: sleep
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sleep
      labels:
        app: sleep
        service: sleep
    spec:
      ports:
      - port: 80
        name: http
      selector:
        app: sleep
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sleep
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: sleep
      template:
        metadata:
          labels:
            app: sleep
        spec:
          terminationGracePeriodSeconds: 0
          serviceAccountName: sleep
          containers:
          - name: sleep
            image: registry.cn-hangzhou.aliyuncs.com/acs/curl:8.1.2
            command: ["/bin/sleep", "infinity"]
            imagePullPolicy: IfNotPresent
            volumeMounts:
            - mountPath: /etc/sleep/tls
              name: secret-volume
          volumes:
          - name: secret-volume
            secret:
              secretName: sleep-secret
              optional: true
    ---
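
A minimal sketch of deploying both files, assuming automatic sidecar injection is enabled for the target namespace as described in the referenced topic:

    kubectl apply -f bookinfo.yaml
    kubectl apply -f sleep.yaml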

Step 5: Access the applications and view the reported tracing data

  1. Run the following command to access the Productpage application:

    kubectl exec -it deploy/sleep -c sleep -- curl productpage:9080/productpage?u=normal
  2. After you access the Productpage application, view the logs of the OpenTelemetry Collector and check the output printed by the debug exporter.
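
    One way to view these logs, assuming the Operator created the Collector Deployment named default-collector as shown in the preceding steps:

    kubectl logs -n opentelemetry-operator-system deploy/default-collector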

    2023-11-20T08:44:27.531Z	info	TracesExporter	{"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 3}
  3. View the tracing data in the Application Real-Time Monitoring Service (ARMS) console.

    1. Log on to the ARMS console.

    2. In the left-side navigation pane, choose Application Monitoring > Trace Explorer. In the upper-left corner of the page, select the desired region.

    3. In the Service Name section, select the sleep application that initiated the request. The tracing data of the sleep application is displayed on the right.

      A sidecar proxy is injected into the sleep application. When the sleep application initiates a request to access other services, ASM regards the sidecar proxy as the egress gateway of the request.


    4. Find the desired trace ID and click Details in the Actions column to view the complete trace and latency of the call.
