Simple Log Service: Collect data by using the non-intrusive monitoring feature

Last Updated: Sep 03, 2024

The data plane monitoring feature provides non-intrusive monitoring capabilities that are developed by the OpenAnolis community and Simple Log Service. You can use the feature to analyze data flows and identify bottlenecks in Kubernetes clusters in cloud-native scenarios.

Prerequisites

  • A Full-stack Observability instance is created. For more information, see Create an instance.

  • The required monitoring component is installed if you want to use the Simple Log Service console to collect data. For more information, see Install a monitoring component.

Limits

If you want to enable data plane monitoring, the host must run a Linux x86_64 operating system with a kernel version of 4.19 or later, or CentOS 7.6 to 7.9, whose kernel version is 3.10.0. You can run the uname -r command to view the kernel version of the operating system.
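
For example, you can run the following command on the host to confirm that the kernel version meets the requirement:

  uname -r

On a CentOS 7.9 host, the command typically returns a 3.10.0-based version such as 3.10.0-1160.el7.x86_64. This sample output is only illustrative.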

Collect data by using the Log Service console

  1. Log on to the Simple Log Service console.

  2. In the Log Application section, click the Intelligent O&M tab. Then, click Full-stack Observability.

  3. On the Simple Log Service Full-stack Observability page, click the instance that you want to manage.

  4. In the left-side navigation pane, click Full-stack Monitoring.

    The first time you use the full-stack monitoring feature of an instance, click Enable.

  5. In the left-side navigation pane, click Data Import. On the Data Import Configurations page, find the Non-intrusive Service Observation switch in the Kubernetes Monitoring section.

    The first time you create a Logtail configuration for this type of monitoring data, turn on the switch to go to the configuration page. If you have already created a Logtail configuration, click the Create icon to go to the configuration page.

  6. Create a machine group. If a machine group is already created, skip this step.

    To create a machine group for an ACK cluster, see Create an IP address-based machine group. To create a machine group for a self-managed cluster, see Create a custom identifier-based machine group.

Collect data by using the command line interface (CLI)

  1. Download the custom resource definition (CRD) generation tool.

    You can install the CRD template tool outside a cluster or in a container inside the cluster.

    • Install the CRD template tool outside a cluster: Make sure that the ~/.kube/config configuration file exists for the logon account. The configuration file includes the settings that allow you to manage the cluster, and you can run kubectl commands to perform the related operations.

    • Install the CRD template tool in a container: The system creates CRDs based on the permissions of an installed component named alibaba-log-controller. Use this method if the ~/.kube/config configuration file does not exist or if connection failures occur due to poor network conditions.
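
    If you plan to install the tool outside the cluster, you can first confirm that kubectl can reach the cluster. This check is a minimal sketch and assumes the default kubeconfig path:

      kubectl --kubeconfig ~/.kube/config get nodes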

    Install the CRD template tool outside a cluster

    1. Log on to a cluster and download the CRD template tool.

      • China

        curl https://logtail-release-cn-hangzhou.oss-cn-hangzhou.aliyuncs.com/kubernetes/crd-tool.tar.gz -o /tmp/crd-tool.tar.gz
      • Outside China

        curl https://logtail-release-ap-southeast-1.oss-ap-southeast-1.aliyuncs.com/kubernetes/crd-tool.tar.gz -o /tmp/crd-tool.tar.gz
    2. Install the CRD template tool. After the tool is installed, sls-crd-tool is generated in the folder in which the CRD template tool is installed.

      tar -xvf /tmp/crd-tool.tar.gz -C /tmp && chmod 755 /tmp/crd-tool/install.sh && sh -x /tmp/crd-tool/install.sh
    3. Run the ./sls-crd-tool list command to check whether the tool is installed. If a value is returned, the tool is installed.
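
      For example, assuming that the sls-crd-tool executable was generated in /tmp/crd-tool (the directory that was extracted in the previous step), the check looks similar to the following:

        cd /tmp/crd-tool
        ./sls-crd-tool list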

    Install the CRD template tool in a container

    1. Log on to a cluster and access the alibaba-log-controller container.

      kubectl get pods -n kube-system -o wide | grep alibaba-log-controller | awk -F ' ' '{print $1}'
      kubectl exec -it {pod} -n kube-system -- bash
      cd ~
    2. Download the CRD template tool.

      • If you can download resources in the cluster over the Internet, run one of the following commands to download the CRD template tool.

        • China

          curl https://logtail-release-cn-hangzhou.oss-cn-hangzhou.aliyuncs.com/kubernetes/crd-tool.tar.gz -o /tmp/crd-tool.tar.gz
        • Outside China

          curl https://logtail-release-ap-southeast-1.oss-ap-southeast-1.aliyuncs.com/kubernetes/crd-tool.tar.gz -o /tmp/crd-tool.tar.gz
      • If you cannot download resources in the cluster over the Internet, you can download the CRD template tool outside the cluster. Then, run the kubectl cp <source> <destination> command or use the file upload feature of ACK to upload the CRD template tool to the container.
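
        For example, assuming that you downloaded the tarball to /tmp on your machine and obtained the pod name in step 1, the copy command looks similar to the following. The pod name is a placeholder:

          kubectl cp /tmp/crd-tool.tar.gz kube-system/<alibaba-log-controller-pod-name>:/tmp/crd-tool.tar.gz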

    3. Install the CRD template tool. After the tool is installed, sls-crd-tool is generated in the folder in which the CRD template tool is installed.

      tar -xvf /tmp/crd-tool.tar.gz -C /tmp && chmod 755 /tmp/crd-tool/install.sh && sh -x /tmp/crd-tool/install.sh
    4. Run the ./sls-crd-tool list command to check whether the tool is installed. If a value is returned, the tool is installed.

  2. Use the CRD generation tool to generate a Logtail configuration.

    1. Run the following command to view the definition of the template:

      ./sls-crd-tool get ebpfK8sPlugin
    2. Replace the REQUIRED parameter with the current instance ID and run the following command to preview the value of the parameter:

      ./sls-crd-tool apply -f template-ebpfK8sPlugin.yaml --create=false
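
      If the preview shows that the placeholder has not been replaced, you can edit template-ebpfK8sPlugin.yaml in a text editor or, assuming that the template uses the literal placeholder REQUIRED, run a command similar to the following and preview again:

        sed -i 's/REQUIRED/<your-instance-id>/g' template-ebpfK8sPlugin.yaml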
    3. Check whether the project parameter specifies the project to which the current instance belongs. If so, run the following command to deploy the template file to collect data:

      ./sls-crd-tool apply -f template-ebpfK8sPlugin.yaml
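
      If kubectl is available in your environment, you can also verify the result from the command line. This check is a sketch that assumes the tool creates standard Simple Log Service AliyunLogConfig resources in the kube-system namespace:

        kubectl get aliyunlogconfigs -n kube-system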
    4. Go to the Data Import Configurations page and check that the Logtail configuration is generated. The number next to Configurations in the Resource Monitoring section is incremented by one. If the system fails to generate the Logtail configuration, the number remains unchanged.

Collect data by using the Simple Log Service console

  1. Click Use Existing Machine Groups.

    After the monitoring component is installed, Simple Log Service automatically creates a machine group whose name is in the k8s-group-${your_k8s_cluster_id} format. You can use this machine group.

  2. Select the k8s-group-${your_k8s_cluster_id} machine group from Source Server Groups and move the machine group to Applied Server Groups. Then, click Next.

    Important

    If the heartbeat status of the machine group is FAIL, you can click Automatic Retry. If the issue persists, see What do I do if a Logtail machine group has no heartbeats?

  3. In the Specify Data Source step, configure the parameters and click Complete. The following list describes the parameters.

    General Settings

    • Config Name: The name of the Logtail configuration. You can specify a custom name.

    • Cluster: The custom name of the cluster. After you configure this parameter, Simple Log Service adds a tag in the cluster=<cluster name> format to the monitoring data that is collected by using the Logtail configuration.

      Important: Make sure that the cluster name is unique. Otherwise, data conflicts may occur.

    • Monitor Application Layer Protocols: If you turn on Monitor Application Layer Protocols, Logtail parses application layer protocols, such as HTTP, MySQL, and Redis.

    • Statistical Interval of Network Metrics: The interval at which Layer 4 network data is aggregated. The data that is generated within the interval is aggregated, and the aggregation result is returned. Unit: seconds. We recommend that you specify an interval that is less than or equal to 600 seconds.

    • Statistical Interval of Protocol Metrics: The interval at which Layer 7 network data is aggregated. The data that is generated within the interval is aggregated, and the aggregation result is returned. Unit: seconds. We recommend that you specify an interval that is less than or equal to 60 seconds.

    • Protocol Sample Rate: The sample rate of network data. Only Layer 7 network data is sampled. The sample rate does not affect the statistics.

    • Protocol Whitelist: The application layer protocols that you want to parse.

    Kubernetes Selector

    • Namespace Whitelist: Configure a namespace-name regular expression to specify the namespaces whose data you want to collect.

    • Namespace Blacklist: Configure a namespace-name regular expression to specify the namespaces whose data you do not want to collect.

    • Pod Whitelist: Configure a pod-name regular expression to specify the pods whose data you want to collect.

    • Pod Blacklist: Configure a pod-name regular expression to specify the pods whose data you do not want to collect.

    • Container Whitelist: Configure a container-name regular expression to specify the containers whose data you want to collect.

    • Container Blacklist: Configure a container-name regular expression to specify the containers whose data you do not want to collect.

    • Label Whitelist: The container label whitelist, which specifies the containers from which logs are collected. Set LabelKey to the name of the label and LabelValue to a regular expression. For example, if you set LabelKey to io.kubernetes.container.name and LabelValue to ^(nginx|cube)$, logs are collected from a container named nginx and a container named cube. Key-value pairs are in a logical OR relationship: if a label of a container matches one of the specified key-value pairs, the logs of the container are collected.

    • Label Blacklist: The container label blacklist, which specifies the containers from which logs are not collected. Set LabelKey to the name of the label and LabelValue to a regular expression. For example, if you set LabelKey to io.kubernetes.container.name and LabelValue to ^(nginx|cube)$, logs are not collected from a container named nginx or a container named cube. Key-value pairs are in a logical OR relationship: if a label of a container matches one of the specified key-value pairs, the logs of the container are not collected.

    • Environment Variable Whitelist: The environment variable whitelist, which specifies the containers from which logs are collected. Set EnvKey to the name of the environment variable and EnvValue to a regular expression. For example, if you set EnvKey to NGINX_SERVICE_PORT and EnvValue to ^(80|6379)$, logs are collected from containers whose port number is 80 or 6379. Key-value pairs are in a logical OR relationship: if an environment variable of a container matches one of the specified key-value pairs, the logs of the container are collected.

    • Environment Variable Blacklist: The environment variable blacklist, which specifies the containers from which logs are not collected. Set EnvKey to the name of the environment variable and EnvValue to a regular expression. For example, if you set EnvKey to NGINX_SERVICE_PORT and EnvValue to ^(80|6379)$, logs are not collected from containers whose port number is 80 or 6379. Key-value pairs are in a logical OR relationship: if an environment variable of a container matches one of the specified key-value pairs, the logs of the container are not collected.

    Advanced configurations

    • Drop Local Packets: If you turn on Drop Local Packets, Logtail drops the packets of inbound requests that are sent from INET domain sockets.

    • Drop Unix Packets: If you turn on Drop Unix Packets, Logtail drops the packets of requests that are sent from Unix domain sockets. In most cases, Unix domain sockets are used for local data transmission.

    • Drop Unknown Packets: If you turn on Drop Unknown Packets, Logtail drops the packets of requests that are sent from neither INET domain sockets nor Unix domain sockets.

    • Read Interval of Container Data: The interval at which container metadata is read. Unit: seconds. We recommend that you specify an interval that is less than or equal to 60 seconds.

    • Read Interval of Socket Data: The interval at which socket metadata is read. Unit: seconds. We recommend that you specify an interval that is less than or equal to 30 seconds.

    • Protocol Aggregation Window: The size of the process-level data aggregation window within the statistical interval of protocol metrics. This parameter is used to control resource consumption and prevent excessive Logtail memory usage when a large number of distinct calls are observed. The default value for clients is 500, and the default value for servers is 5000.

    After you configure the settings, Simple Log Service automatically creates assets such as Metricstores. For more information, see Assets.

What to do next

After monitoring data of Kubernetes data planes is collected to Simple Log Service, the Full-stack Monitoring application automatically creates dedicated dashboards for the monitoring data. You can use the dashboards to analyze the monitoring data. For more information, see View dashboards.
