Simple Log Service: Collect text logs from Kubernetes containers in Sidecar mode

Last Updated: Nov 19, 2024

If you want to use a separate Logtail process to collect logs from all containers in a pod, you can install Logtail in a Kubernetes cluster in Sidecar mode. This topic describes the implementation, limits, prerequisites, and procedure of collecting container text logs in Sidecar mode.

Implementation


Sidecar mode

  • In Sidecar mode, each pod runs its own Logtail container, which collects logs from all containers in the pod. Log collection for each pod is isolated from that of other pods.

  • To ensure that Logtail can collect logs from other containers in a pod, make sure that the Logtail container and application containers share the same volume. For more information about how to collect container logs in Sidecar mode, see Sidecar container with a logging agent and Pods with multiple containers. For more information about volumes, see Storage basics.
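
The following snippet is a minimal sketch of this layout: an application container and a Logtail sidecar container run in the same pod and mount one shared emptyDir volume at the log directory. The container names, image names, and the /var/log/app path are placeholders for illustration only; the complete, working template is provided in Step 1.

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-logtail-sidecar
    spec:
      containers:
        # Application container that writes log files to /var/log/app.
        - name: app
          image: your-app-image
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app
        # Logtail sidecar container that reads the same files from the shared volume.
        - name: logtail
          image: your-logtail-image
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app
      volumes:
        # Shared emptyDir volume that makes the log files visible to both containers.
        - name: app-logs
          emptyDir: {}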

Prerequisites

  • Ports 80 (HTTP) and 443 (HTTPS) for outbound traffic are enabled for the server on which Logtail is installed. If the server is an Elastic Compute Service (ECS) instance, you can reconfigure the related security group rules to enable the ports. For more information about how to configure a security group rule, see Add a security group rule. A simple connectivity check is shown after this list.

  • Logs are continuously generated in the container from which you want to collect logs. Logtail collects only incremental logs. If a log file on your server is not updated after a Logtail configuration is delivered and applied to the server, Logtail does not collect logs from the file. For more information, see Read log files.

  • The files from which you want to collect logs are stored in the volume that is mounted to the required Logtail container.
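
If you are unsure whether the ports are open, you can run a quick connectivity check similar to the following from the server or node. This is only an illustrative test; cn-hangzhou.log.aliyuncs.com is the China (Hangzhou) example endpoint used later in this topic, and you must replace it with the endpoint of the region where your project resides.

    # Check outbound HTTPS (443) connectivity to the Simple Log Service endpoint.
    curl -I https://cn-hangzhou.log.aliyuncs.com
    # Check outbound HTTP (80) connectivity to the same endpoint.
    curl -I http://cn-hangzhou.log.aliyuncs.com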

Step 1: Inject a Logtail container into a business pod

  1. Log on to your Kubernetes cluster.

  2. Create a YAML file. In the following command, sidecar.yaml is a sample file name. You can specify a different file name based on your business requirements.

    vim sidecar.yaml
  3. Enter the following script in the YAML file and configure the parameters based on your business requirements.

    Warning

    In the following YAML template, replace all placeholders in the ${} format with actual values. Do not modify or delete other parameters.

    YAML template

    apiVersion: batch/v1
    kind: Job
    metadata:
      # Add Job metadata, such as the name and namespace.
      name: ${job_name}
      namespace: ${namespace}
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            # Configure settings for an application container.
            - name: ${main_container_name}
              image: ${main_container_image}
              command: ["/bin/sh", "-c"]
              args:
                - until [[ -f /tasksite/cornerstone ]]; do sleep 1; done;
                  # Replace the command variable with the actual startup command of the application container.
                  ${container_start_cmd};
                  retcode=$?;
                  touch /tasksite/tombstone;
                  exit $retcode
              volumeMounts:
                # Mount the log directory of the application container to the shared volume.
                - name: ${shared_volume_name}
                  mountPath: ${dir_containing_your_files}
                # Create a mount target to interact with the Logtail container.
                - mountPath: /tasksite
                  name: tasksite
             
            # Configure settings for the Logtail container, which is a sidecar container.
            - name: logtail
              image: ${logtail_image}
              command: ["/bin/sh", "-c"]
              args:
                - /etc/init.d/ilogtaild start;
                  sleep 10; # Wait until the Logtail configuration is downloaded.
                  touch /tasksite/cornerstone;
                  until [[ -f /tasksite/tombstone ]]; do sleep 1; done;
                  sleep 10; # Wait until Logtail finishes sending logs.
                  /etc/init.d/ilogtaild stop;
              livenessProbe:
                exec:
                  command:
                    - /etc/init.d/ilogtaild
                    - status
                initialDelaySeconds: 30
                periodSeconds: 30
              env:
                # Specify a time zone. Specify the time zone in the format of Region/City based on the region where the Kubernetes cluster resides. For example, if your cluster resides in the Chinese mainland, set the time zone to Asia/Shanghai. 
                # If the specified time zone is invalid, the time labels of raw logs and processed logs may not match. As a result, logs may be archived based on an incorrect point in time. 
                - name: TZ   
                  value: "${timezone}"
                - name: ALIYUN_LOGTAIL_USER_ID
                  value: "${your_aliyun_user_id}"
                - name: ALIYUN_LOGTAIL_USER_DEFINED_ID
                  value: "${your_machine_group_user_defined_id}"
                - name: ALIYUN_LOGTAIL_CONFIG
                  value: "/etc/ilogtail/conf/${your_region_config}/ilogtail_config.json"
                # Specify the pod environment information as log labels.
                - name: "ALIYUN_LOG_ENV_TAGS"
                  value: "_pod_name_|_pod_ip_|_namespace_|_node_name_|_node_ip_"
                # Obtain the pod and node information.
                - name: "_pod_name_"
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: "_pod_ip_"
                  valueFrom:
                    fieldRef:
                      fieldPath: status.podIP
                - name: "_namespace_"
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                - name: "_node_name_"
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
                - name: "_node_ip_"
                  valueFrom:
                    fieldRef:
                      fieldPath: status.hostIP
              volumeMounts:
                # Mount the log directory of the Logtail container to the shared volume.
                - name: ${shared_volume_name}
                  mountPath: ${dir_containing_your_files}
                # Create a mount target to interact with the application container.
                - mountPath: /tasksite
                  name: tasksite
          volumes:
            # Define a shared volume that is empty for log storage.
            - name: ${shared_volume_name}
              emptyDir: {}
            # Define a volume for containers to communicate with each other.
            - name: tasksite
              emptyDir:
                medium: Memory
    

    Key parameters

    • ${your_aliyun_user_id}: The ID of your Alibaba Cloud account. For more information, see Configure a user identifier.

    • ${your_machine_group_user_defined_id}: The custom identifier of your machine group. Example: nginx-log-sidecar.

      Important

      The custom identifier must be unique in the region where your project resides.

    • ${your_region_config}: The region ID and network type of your project. For more information about regions, see Install Logtail on a Linux server.

      • If logs are collected to your project over the Internet, specify the value in the region-internet format. For example, if your project resides in the China (Hangzhou) region, specify cn-hangzhou-internet.

      • If logs are collected to your project over an internal network of Alibaba Cloud, specify the value in the region format. For example, if your project resides in the China (Hangzhou) region, specify cn-hangzhou.

    • ${logtail_image}: The address of the Logtail image.

    • ${shared_volume_name}: The name of the volume. You can specify a name based on your business requirements.

      Important

      The value of the name parameter in the volumeMounts node and the value of the name parameter in the volumes node must be the same. This ensures that the same volume is mounted to the Logtail container and the application container.

    • ${dir_containing_your_files}: The mount path. Specify the directory of container text logs that you want to collect.

    Example

    apiVersion: batch/v1
    kind: Job
    metadata:
      # Add Job metadata, such as the name and namespace.
      name: nginx-log-sidecar-demo
      namespace: default
    spec:
      template:
        metadata:
          # Add pod metadata, such as labels.
          labels:
            app: nginx-logger
        spec:
          restartPolicy: Never
          containers:
            # Configure settings for an application container.
            - name: nginx
              image: nginx-test
              command: ["/bin/sh", "-c"]
              args:
                - until [[ -f /tasksite/cornerstone ]]; do sleep 1; done;
                  # Replace the command variable with the actual startup command of the application container.
                  nginx -g 'daemon off;';
                  retcode=$?;
                  touch /tasksite/tombstone;
                  exit $retcode
              volumeMounts:
                # Mount the log directory of the application container to the shared volume.
                - name: nginx-logs
                  mountPath: /var/log/nginx
                # Create a mount target to interact with the Logtail container.
                - mountPath: /tasksite
                  name: tasksite
              # Define resource requests and limits for the application container.
              resources:
                limits:
                  cpu: 500m
                  memory: 512Mi
                requests:
                  cpu: 10m
                  memory: 30Mi
            # Configure settings for the Logtail container, which is a sidecar container.
            - name: logtail
              image: registry.cn-hangzhou.aliyuncs.com/log-service/logtail:v1.5.1.0-aliyun
              command: ["/bin/sh", "-c"]
              args:
                - /etc/init.d/ilogtaild start;
                  sleep 10; # Wait until the Logtail configuration is downloaded.
                  touch /tasksite/cornerstone;
                  until [[ -f /tasksite/tombstone ]]; do sleep 1; done;
                  sleep 10; # Wait until Logtail finishes sending logs.
                  /etc/init.d/ilogtaild stop;
              livenessProbe:
                exec:
                  command:
                    - /etc/init.d/ilogtaild
                    - status
                initialDelaySeconds: 30
                periodSeconds: 30
              resources:
                limits:
                  cpu: 500m
                  memory: 512Mi
                requests:
                  cpu: 10m
                  memory: 30Mi
              env:
                # Specify a time zone. Specify the time zone in the format of Region/City based on the region where the Kubernetes cluster resides. For example, if your cluster resides in the Chinese mainland, set the time zone to Asia/Shanghai. 
                # If the specified time zone is invalid, the time labels of raw logs and processed logs may not match. As a result, logs may be archived based on an incorrect point in time. 
                - name: TZ   
                  value: "Asia/Shanghai"
                # Replace the environment variables with the actual values.
                - name: ALIYUN_LOGTAIL_USER_ID
                  value: "20*******28"
                - name: ALIYUN_LOGTAIL_USER_DEFINED_ID
                  value: "nginx-log-sidecar"
                - name: ALIYUN_LOGTAIL_CONFIG
                  value: "/etc/ilogtail/conf/cn-hangzhou-internet/ilogtail_config.json"
                # Specify the pod environment information as log labels.
                - name: "ALIYUN_LOG_ENV_TAGS"
                  value: "_pod_name_|_pod_ip_|_namespace_|_node_name_|_node_ip_"
                # Obtain the pod and node information.
                - name: "_pod_name_"
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: "_pod_ip_"
                  valueFrom:
                    fieldRef:
                      fieldPath: status.podIP
                - name: "_namespace_"
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                - name: "_node_name_"
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
                - name: "_node_ip_"
                  valueFrom:
                    fieldRef:
                      fieldPath: status.hostIP
              volumeMounts:
                # Mount the log directory of the Logtail container to the shared volume.
                - name: nginx-logs
                  mountPath: /var/log/nginx
                # Create a mount target to interact with the application container.
                - mountPath: /tasksite
                  name: tasksite
          volumes:
            # Define a shared volume that is empty for log storage.
            - name: nginx-logs
              emptyDir: {}
            # Define a volume for containers to communicate with each other.
            - name: tasksite
              emptyDir:
                medium: Memory
    
  4. Run the following command to apply the configurations in the sidecar.yaml file.

    In the following command, sidecar.yaml is a sample file name. You can specify a different file name based on your business requirements.

    kubectl apply -f sidecar.yaml
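
    After the Job is created, you can optionally verify that the Logtail sidecar runs as expected and generate a test log entry for it to collect. The following commands are for illustration only: <namespace>, <pod-name>, and <main-container-name> are placeholders, and the /var/log/nginx/access.log path comes from the preceding example. Adjust all of them to your actual values.

    # Check that the pod created by the Job is running and contains both containers.
    kubectl get pods -n <namespace> -o wide
    # View the startup output of the Logtail sidecar container.
    kubectl logs <pod-name> -n <namespace> -c logtail
    # Append a test log line to a file in the shared log directory of the application
    # container so that Logtail has incremental data to collect.
    kubectl exec <pod-name> -n <namespace> -c <main-container-name> -- \
      sh -c 'echo "test log $(date)" >> /var/log/nginx/access.log'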

Step 2: Create a Logtail configuration

Warning

If you use a CustomResourceDefinition (CRD) to create a Logtail configuration and modify the Logtail configuration in the Simple Log Service console, the modification is not synchronized to the CRD. If you want to modify a Logtail configuration that is created by using a CRD, you must modify the CRD. If you modify the configuration in the Simple Log Service console, Logtail configuration inconsistency may occur.

Console

  1. Log on to the Simple Log Service console.

  2. In the Quick Data Import section, click Import Data. In the Import Data dialog box, click the Kubernetes - File card.


  3. Select the required project and Logstore. Then, click Next. In this example, select the project that you use to install the Logtail components and the Logstore that you create.

  4. In the Machine Group Configurations step, perform the following operations:

    1. Use one of the following settings based on your business requirements:

      • Kubernetes Clusters > ACK Daemonset

      • Kubernetes Clusters > Self-managed Cluster in DaemonSet Mode

        Important

        The subsequent settings vary based on the option that you select.

    2. Confirm that the required machine groups are added to the Applied Server Groups section. Then, click Next. After you install Logtail components in a Container Service for Kubernetes (ACK) cluster, Simple Log Service automatically creates a machine group named k8s-group-${your_k8s_cluster_id}. You can directly use this machine group.

  5. Create a Logtail configuration and click Next. Simple Log Service starts to collect logs after the Logtail configuration is created.

    Note

    A Logtail configuration requires up to 3 minutes to take effect.

    Global Configurations

    Parameter

    Description

    Configuration Name

    Enter a name for the Logtail configuration. The name must be unique in a project. After you create the Logtail configuration, you cannot change its name.

    Log Topic Type

    Select a method to generate log topics. For more information, see Log topics.

    • Machine Group Topic: The topics of the machine groups are used as log topics. If you want to distinguish the logs from different machine groups, select this option.

    • File Path Extraction: You must specify a custom regular expression. A part of the file path that matches the regular expression is used as the log topic. If you want to distinguish the logs from different sources, select this option.

    • Custom: You must specify a custom log topic.

    Advanced Parameters

    Optional. Configure the advanced parameters that are related to global configurations. For more information, see CreateLogtailPipelineConfig.

    Input Configurations

    Parameter

    Description

    Logtail Deployment Mode

    Select the deployment mode of Logtail. In this example, Daemonset is selected.

    File Path Type

    Select the type of the file path that you want to use to collect logs. Valid values: Path in Container and Host Path. If a hostPath volume is mounted to a container and you want to collect logs from files based on the mapped file path on the container host, set this parameter to Host Path. In other scenarios, set this parameter to Path in Container.

    File Path

    • If the required container runs on a Linux host, specify a path that starts with a forward slash (/). Example: /apsara/nuwa/**/app.Log.

    • If the required container runs on a Windows host, specify a path that starts with a drive letter. Example: C:\Program Files\Intel\**\*.Log.

    You can specify an exact directory and an exact name. You can also use wildcard characters to specify the directory and name. For more information, see Wildcard matching. When you configure this parameter, you can use only asterisks (*) or question marks (?) as wildcard characters.

    Simple Log Service scans all levels of the specified directory for the log files that match specified conditions. Examples:

    • If you specify /apsara/nuwa/**/*.log, Simple Log Service collects logs from the log files whose names are suffixed by .log in the /apsara/nuwa directory and the recursive subdirectories of the directory.

    • If you specify /var/logs/app_*/**/*.log, Simple Log Service collects logs from the log files that meet the following conditions: The file name is suffixed by .log. The file is stored in a subdirectory under the /var/logs directory or in a recursive subdirectory of the subdirectory. The name of the subdirectory matches the app_* pattern.

    • If you specify /var/log/nginx/**/access*, Simple Log Service collects logs from the log files whose names start with access in the /var/log/nginx directory and the recursive subdirectories of the directory.

    Maximum Directory Monitoring Depth

    Specify the maximum number of levels of subdirectories that you want to monitor. The subdirectories are in the log file directory that you specify. This parameter specifies the levels of subdirectories that can be matched for the wildcard characters ** included in the value of File Path. A value of 0 specifies that only the log file directory that you specify is monitored.

    Warning

    We recommend that you configure this parameter based on the minimum requirement. If you specify a large value, Logtail may consume more monitoring resources and cause collection latency.

    Enable Container Metadata Preview

    If you turn on Enable Container Metadata Preview, you can view the container metadata after you create the Logtail configuration, including the matched container information and full container information.

    Container Filtering

    • Logtail version

      • If the version of Logtail is earlier than 1.0.34, you can use only environment variables and container labels to filter containers.

      • If the version of Logtail is 1.0.34 or later, we recommend that you use different levels of Kubernetes information to filter containers. The information includes pod names, namespaces, container names, and container labels.

    • Filter conditions

      Important
      • Container labels are retrieved by running the docker inspect command. Container labels are different from Kubernetes labels. For more information, see Obtain container labels.

      • Environment variables are the same as the environment variables that are configured to start containers. For more information, see Obtain environment variables.

      1. Kubernetes namespaces and container names can be mapped to container labels. The label for a namespace is io.kubernetes.pod.namespace. The label for a container name is io.kubernetes.container.name. We recommend that you use the two labels to filter containers. For example, the namespace of a pod is backend-prod, and the name of a container in the pod is worker-server. If you want to collect the logs of the worker-server container, you can specify io.kubernetes.pod.namespace: backend-prod or io.kubernetes.container.name: worker-server in the container label whitelist.

      2. If the two labels do not meet your business requirements, you can use the environment variable whitelist or the environment variable blacklist to filter containers.

    • K8s Pod Name Regular Matching

      Enter the pod name. The pod name specifies the containers from which text logs are collected. Regular expression matching is supported. For example, if you specify ^(nginx-log-demo.*)$, all containers in the pod whose name starts with nginx-log-demo are matched.

    • K8s Namespace Regular Matching

      Enter the namespace name. The namespace name specifies the containers from which text logs are collected. Regular expression matching is supported. For example, if you specify ^(default|nginx)$, all containers in the nginx and default namespaces are matched.

    • K8s Container Name Regular Matching

      Enter the container name. The container name specifies the containers from which text logs are collected. Regular expression matching is supported. Kubernetes container names are defined in spec.containers. For example, if you specify ^(container-test)$, all containers whose name is container-test are matched.

    • Container Label Whitelist

      Configure a container label whitelist. The whitelist specifies the containers from which text logs are collected.

      Note

      Do not specify duplicate values for the Label Name parameter. If you specify duplicate values, only one value takes effect.

      • If you specify a value for the Label Name parameter but do not specify a value for the Label Value parameter, containers whose container labels contain the specified label name are matched.

      • If you specify a value for the Label Name and Label Value parameters, containers whose container labels contain the specified Label Name:Label Value are matched.

        By default, string matching is performed for the values of the Label Value parameter. Containers are matched only if the values of the container labels are the same as the values of the Label Value parameter. If you specify a value that starts with a caret (^) and ends with a dollar sign ($) for the Label Value parameter, regular expression matching is performed. For example, if you set the Label Name parameter to app and set the Label Value parameter to ^(test1|test2)$, containers whose container labels contain app:test1 or app:test2 are matched.

      Key-value pairs are evaluated by using the OR operator. If a container has a container label that consists of one of the specified key-value pairs, the container is matched.

    • Container Label Blacklist

      Configure a container label blacklist. The blacklist specifies the containers from which text logs are not collected.

      Note

      Do not specify duplicate values for the Label Name parameter. If you specify duplicate values, only one value takes effect.

      • If you specify a value for the Label Name parameter but do not specify a value for the Label Value parameter, containers whose container labels contain the specified label name are filtered out.

      • If you specify a value for the Label Name and Label Value parameters, containers whose container labels contain the specified Label Name:Label Value are filtered out.

        By default, string matching is performed for the values of the Label Value parameter. Containers are filtered out only if the values of the container labels are the same as the values of the Label Value parameter. If you specify a value that starts with a caret (^) and ends with a dollar sign ($) for the Label Value parameter, regular expression matching is performed. For example, if you set the Label Name parameter to app and set the Label Value parameter to ^(test1|test2)$, containers whose container labels contain app:test1 or app:test2 are filtered out.

      Key-value pairs are evaluated by using the OR operator. If a container has a container label that consists of one of the specified key-value pairs, the container is filtered out.

    • Environment Variable Whitelist

      Configure an environment variable whitelist. The whitelist specifies the containers from which text logs are collected.

      • If you specify a value for the Environment Variable Name parameter but do not specify a value for the Environment Variable Value parameter, containers whose environment variables contain the specified environment variable name are matched.

      • If you specify a value for the Environment Variable Name and Environment Variable Value parameters, containers whose environment variables contain the specified Environment Variable Name:Environment Variable Value are matched.

        By default, string matching is performed for the values of the Environment Variable Value parameter. Containers are matched only if the values of the environment variables are the same as the values of the Environment Variable Value parameter. If you specify a value that starts with a caret (^) and ends with a dollar sign ($) for the Environment Variable Value parameter, regular expression matching is performed. For example, if you set the Environment Variable Name parameter to NGINX_SERVICE_PORT and set the Environment Variable Value parameter to ^(80|6379)$, containers whose port number is 80 or 6379 are matched.

      Key-value pairs are evaluated by using the OR operator. If a container has an environment variable that consists of one of the specified key-value pairs, the container is matched.

    • Environment Variable Blacklist

      Configure an environment variable blacklist. The blacklist specifies the containers from which text logs are not collected.

      • If you specify a value for the Environment Variable Name parameter but do not specify a value for the Environment Variable Value parameter, containers whose environment variables contain the specified environment variable name are filtered out.

      • If you specify a value for the Environment Variable Name and Environment Variable Value parameters, containers whose environment variables contain the specified Environment Variable Name:Environment Variable Value are filtered out.

        By default, string matching is performed for the values of the Environment Variable Value parameter. Containers are filtered out only if the values of the environment variables are the same as the values of the Environment Variable Value parameter. If you specify a value that starts with a caret (^) and ends with a dollar sign ($) for the Environment Variable Value parameter, regular expression matching is performed. For example, if you set the Environment Variable Name parameter to NGINX_SERVICE_PORT and set the Environment Variable Value parameter to ^(80|6379)$, containers whose port number is 80 or 6379 are filtered out.

      Key-value pairs are evaluated by using the OR operator. If a container has an environment variable that consists of one of the specified key-value pairs, the container is filtered out.

    • Kubernetes Pod Label Whitelist

      Configure a Kubernetes pod label whitelist. The whitelist specifies the containers from which text logs are collected.

      • If you specify a value for the Label Name parameter but do not specify a value for the Label Value parameter, containers whose pod labels contain the specified label name are matched.

      • If you specify a value for the Label Name and Label Value parameters, containers whose pod labels contain the specified Label Name:Label Value are matched.

        By default, string matching is performed for the values of the Label Value parameter. Containers are matched only if the values of the pod labels are the same as the values of the Label Value parameter. If you specify a value that starts with a caret (^) and ends with a dollar sign ($), regular expression matching is performed. For example, if you set the Label Name parameter to environment and set the Label Value parameter to ^(dev|pre)$, containers whose pod labels contain environment:dev or environment:pre are matched.

      Key-value pairs are evaluated by using the OR operator. If a container has a pod label that consists of one of the specified key-value pairs, the container is matched.

    • Kubernetes Pod Label Blacklist

      Configure a Kubernetes pod label blacklist. The blacklist specifies the containers from which text logs are not collected.

      • If you specify a value for the Label Name parameter but do not specify a value for the Label Value parameter, containers whose pod labels contain the specified label name are filtered out.

      • If you specify a value for the Label Name and Label Value parameters, containers whose pod labels contain the specified Label Name:Label Value are filtered out.

        By default, string matching is performed for the values of the Label Value parameter. Containers are filtered out only if the values of the pod labels are the same as the values of the Label Value parameter. If you specify a value that starts with a caret (^) and ends with a dollar sign ($) for the Label Value parameter, regular expression matching is performed. For example, if you set the Label Name parameter to environment and set the Label Value parameter to ^(dev|pre)$, containers whose pod labels contain environment:dev or environment:pre are filtered out.

      Key-value pairs are evaluated by using the OR operator. If a container has a pod label that consists of one of the specified key-value pairs, the container is filtered out.
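
    If you manage the Logtail configuration by using the AliyunPipelineConfig CRD instead of the console, the namespace and container name filters described above correspond to the ContainerFilters fields of the input_file plug-in, as shown in the CRD examples later in this topic. The following fragment repeats only the fields that appear in those examples and reuses the sample values from this section; treat it as an illustrative excerpt, not a complete configuration.

      inputs:
        - Type: input_file
          FilePaths:
            - /data/logs/app_1/**/test.LOG
          EnableContainerDiscovery: true
          ContainerFilters:
            # Namespace of the pods from which logs are collected. Regular expressions are supported.
            K8sNamespaceRegex: ^(default|nginx)$
            # Name of the containers from which logs are collected. Regular expressions are supported.
            K8sContainerRegex: ^(container-test)$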

    Log Tag Enrichment

    Specify log tags by using environment variables and pod labels.

    File Encoding

    Select the encoding format of log files.

    First Collection Size

    Specify the size of data that Logtail can collect from a log file the first time Logtail collects logs from the file. The default value of First Collection Size is 1024. Unit: KB.

    • If the file size is less than 1,024 KB, Logtail collects data from the beginning of the file.

    • If the file size is greater than 1,024 KB, Logtail collects the last 1,024 KB of data in the file.

    You can specify First Collection Size based on your business requirements. Valid values: 0 to 10485760. Unit: KB.

    Collection Blacklist

    If you turn on Collection Blacklist, you must configure a blacklist to specify the directories or files that you want Simple Log Service to skip when it collects logs. You can specify exact directories and file names. You can also use wildcard characters to specify directories and file names. When you configure this parameter, you can use only asterisks (*) or question marks (?) as wildcard characters.

    Important
    • If you use wildcard characters to configure File Path and you want to skip some directories in the specified directory, you must configure Collection Blacklist and enter a complete directory.

      For example, if you set File Path to /home/admin/app*/log/*.log and you want to skip all subdirectories in the /home/admin/app1* directory, you must select Directory Blacklist and enter /home/admin/app1*/** in the Directory Name field. If you enter /home/admin/app1*, the blacklist does not take effect.

    • When a blacklist is in use, computational overhead is generated. We recommend that you add up to 10 entries to the blacklist.

    • You cannot specify a directory path that ends with a forward slash (/). For example, if you set the path to /home/admin/dir1/, the directory blacklist does not take effect.

    The following types of blacklists are supported: File Path Blacklist, File Blacklist, and Directory Blacklist.

    File Path Blacklist

    • If you select File Path Blacklist and enter /home/admin/private*.log in the File Path Name field, all files whose names are prefixed by private and suffixed by .log in the /home/admin/ directory are skipped.

    • If you select File Path Blacklist and enter /home/admin/private*/*_inner.log in the File Path Name field, all files whose names are suffixed by _inner.log in the subdirectories whose names are prefixed by private in the /home/admin/ directory are skipped. For example, the /home/admin/private/app_inner.log file is skipped, but the /home/admin/private/app.log file is not skipped.

    File Blacklist

    If you select File Blacklist and enter app_inner.log in the File Name field, all files whose names are app_inner.log are skipped.

    Directory Blacklist

    • If you select Directory Blacklist and enter /home/admin/dir1 in the Directory Name field, all files in the /home/admin/dir1 directory are skipped.

    • If you select Directory Blacklist and enter /home/admin/dir* in the Directory Name field, the files in all subdirectories whose names are prefixed by dir in the /home/admin/ directory are skipped.

    • If you select Directory Blacklist and enter /home/admin/*/dir in the Directory Name field, all files in the dir subdirectory in each second-level subdirectory of the /home/admin/ directory are skipped. For example, the files in the /home/admin/a/dir directory are skipped, but the files in the /home/admin/a/b/dir directory are not skipped.

    Allow File to Be Collected for Multiple Times

    By default, you can use only one Logtail configuration to collect logs from a log file. To use multiple Logtail configurations to collect logs from a log file, turn on Allow File to Be Collected for Multiple Times.

    Advanced Parameters

    You must manually configure specific parameters of a Logtail configuration. For more information, see Create a Logtail pipeline configuration.

    Processor Configurations

    Parameter

    Description

    Log Sample

    Add a sample log that is collected from an actual scenario. You can use the sample log to configure parameters that are related to log processing with ease. You can add multiple sample logs. Make sure that the total length of the logs does not exceed 1,500 characters.

    [2023-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened
        at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
        at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
        at TestPrintStackTrace.main(TestPrintStackTrace.java:16)

    Multi-line Mode

    • Specify the type of multi-line logs. A multi-line log spans multiple consecutive lines. You can configure this parameter to identify each multi-line log in a log file.

      • Custom: A multi-line log is identified based on the value of Regex to Match First Line.

      • Multi-line JSON: Each JSON object is expanded into multiple lines. Example:

        {
          "name": "John Doe",
          "age": 30,
          "address": {
            "city": "New York",
            "country": "USA"
          }
        }
    • Configure Processing Method If Splitting Fails.

      Exception in thread "main" java.lang.NullPointerException
          at com.example.MyClass.methodA(MyClass.java:12)
          at com.example.MyClass.methodB(MyClass.java:34)
          at com.example.MyClass.main(MyClass.java:56)

      For the preceding sample log, Simple Log Service can discard the log or retain each single line as a log when it fails to split the log.

      • Discard: The log is discarded.

      • Retain Single Line: Each line of log text is retained as a separate log. In the preceding sample, four logs are retained.
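
    For the bracketed sample log shown under Log Sample, a regular expression that matches the beginning of the first line in Custom mode could look like the following. This pattern is only an illustration; adapt it to the actual timestamp format of your logs.

      \[\d+-\d+-\d+T\d+:\d+:\d+,\d+\].*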

    Processing Method

    Select Processors. You can add native plug-ins and extended plug-ins for data processing. For more information about Logtail plug-ins for data processing, see Logtail plug-ins overview.

    Important

    You are subject to the limits of Logtail plug-ins for data processing. For more information, see the on-screen instructions in the Simple Log Service console.

    • Logtail earlier than V2.0

      • You cannot add native plug-ins and extended plug-ins at the same time.

      • You can use native plug-ins only to collect text logs. When you add native plug-ins, take note of the following items:

        • You must add one of the following Logtail plug-ins for data processing as the first plug-in: Data Parsing (Regex Mode), Data Parsing (Delimiter Mode), Data Parsing (JSON Mode), Data Parsing (NGINX Mode), Data Parsing (Apache Mode), and Data Parsing (IIS Mode).

        • After you add the first plug-in, you can add one Time Parsing plug-in, one Data Filtering plug-in, and multiple Data Masking plug-ins.

      • You can add extended plug-ins only after you add native plug-ins.

    • Logtail V2.0

      • You can arbitrarily combine native plug-ins for data processing.

      • You can combine native plug-ins and extended plug-ins. Make sure that extended plug-ins are added after native plug-ins.

  6. Create indexes and preview data. Then, click Next. By default, full-text indexing is enabled in Simple Log Service. You can also configure field indexes based on collected logs in manual mode or automatic mode. To configure field indexes in automatic mode, click Automatic Index Generation. This way, Simple Log Service automatically creates field indexes. For more information, see Create indexes.

    Important

    If you want to query all fields in logs, we recommend that you use full-text indexes. If you want to query only specific fields, we recommend that you use field indexes. This helps reduce index traffic. If you want to analyze fields, you must create field indexes. You must include a SELECT statement in your query statement for analysis.

  7. Click Query Log. Then, you are redirected to the query and analysis page of your Logstore.

    You must wait approximately 1 minute for the indexes to take effect. Then, you can view the collected logs on the Raw Logs tab. For more information, see Query and analyze logs.

(Recommended) CRD - AliyunPipelineConfig

Create a Logtail configuration

Important

Only Logtail components V0.5.1 or later support the AliyunPipelineConfig CRD.

To create a Logtail configuration, you only need to create a CR from the AliyunPipelineConfig CRD. After the Logtail configuration is created, it is automatically applied. If you want to modify a Logtail configuration that is created based on a CR, you must modify the CR.

  1. Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.

  2. Run the following command to create a YAML file.

    In the following command, cube.yaml is a sample file name. You can specify a different file name based on your business requirements.

    vim cube.yaml
  3. Enter the following script in the YAML file and configure the parameters based on your business requirements.

    Important
    • The value of the configName parameter must be unique in the Simple Log Service project that you use to install the Logtail components.

    • You must configure a CR for each Logtail configuration. If multiple CRs are associated with the same Logtail configuration, the CRs other than the first CR do not take effect.

    • For more information about the parameters related to the AliyunPipelineConfig CRD, see (Recommended) Use AliyunPipelineConfig to manage a Logtail configuration. In this example, the Logtail configuration includes settings for text log collection. For more information, see CreateLogtailPipelineConfig.

    • Make sure that the Logstore specified by the config.flushers.Logstore parameter exists. You can configure the spec.logstores parameter to automatically create a Logstore.

    Collect single-line text logs from specific containers

    In this example, a Logtail configuration named example-k8s-file is created to collect single-line text logs from the containers whose names contain app in a cluster. The file is test.LOG, and the path is /data/logs/app_1.

    The collected logs are stored in a Logstore named k8s-file, which belongs to a project named k8s-log-test.

    apiVersion: telemetry.alibabacloud.com/v1alpha1
    # Create a CR from the ClusterAliyunPipelineConfig CRD.
    kind: ClusterAliyunPipelineConfig
    metadata:
      # Specify the name of the resource. The name must be unique in the current Kubernetes cluster. The name is the same as the name of the Logtail configuration that is created.
      name: example-k8s-file
    spec:
      # Specify the project to which logs are collected.
      project:
        name: k8s-log-test
      # Create a Logstore to store logs.
      logstores:
        - name: k8s-file
      # Configure the parameters for the Logtail configuration.
      config:
        # Configure the Logtail input plug-ins.
        inputs:
          # Use the input_file plug-in to collect text logs from containers.
          - Type: input_file
            # Specify the file path in the containers.
            FilePaths:
              - /data/logs/app_1/**/test.LOG
            # Enable the container discovery feature. 
            EnableContainerDiscovery: true
            # Add conditions to filter containers. Multiple conditions are evaluated by using a logical AND. 
            ContainerFilters:
              # Specify the namespace of the pod to which the required containers belong. Regular expression matching is supported. 
              K8sNamespaceRegex: default
              # Specify the name of the required containers. Regular expression matching is supported. 
              K8sContainerRegex: ^(.*app.*)$
        # Configure the Logtail output plug-ins.
        flushers:
          # Use the flusher_sls plug-in to send logs to a specific Logstore. 
          - Type: flusher_sls
            # Make sure that the Logstore exists.
            Logstore: k8s-file
            # Make sure that the endpoint is valid.
            Endpoint: cn-hangzhou.log.aliyuncs.com
            Region: cn-hangzhou
            TelemetryType: logs

    Collect multi-line text logs from all containers and use regular expressions to parse the logs

    In this example, a Logtail configuration named example-k8s-file is created to collect multi-line text logs from all containers in a cluster. The file is test.LOG, and the path is /data/logs/app_1. The collected logs are parsed by using a regular expression and stored in a Logstore named k8s-file, which belongs to a project named k8s-log-test.

    The sample log provided in the following example is read by the input_file plug-in in the {"content": "2024-06-19 16:35:00 INFO test log\nline-1\nline-2\nend"} format. Then, the log is parsed based on a regular expression into {"time": "2024-06-19 16:35:00", "level": "INFO", "msg": "test log\nline-1\nline-2\nend"}.

    apiVersion: telemetry.alibabacloud.com/v1alpha1
    # Create a CR from the ClusterAliyunPipelineConfig CRD.
    kind: ClusterAliyunPipelineConfig
    metadata:
      # Specify the name of the resource. The name must be unique in the current Kubernetes cluster. The name is the same as the name of the Logtail configuration that is created.
      name: example-k8s-file
    spec:
      # Specify the project to which logs are collected.
      project:
        name: k8s-log-test
      # Create a Logstore to store logs.
      logstores:
        - name: k8s-file
      # Configure the parameters for the Logtail configuration.
      config:
        # Specify the sample log. You can leave this parameter empty.
        sample: |
          2024-06-19 16:35:00 INFO test log
          line-1
          line-2
          end
        # Configure the Logtail input plug-ins.
        inputs:
          # Use the input_file plug-in to collect multi-line text logs from containers.
          - Type: input_file
            # Specify the file path in the containers.
            FilePaths:
              - /data/logs/app_1/**/test.LOG
            # Enable the container discovery feature. 
            EnableContainerDiscovery: true
            # Enable multi-line log collection.
            Multiline:
              # Specify the custom mode to match the beginning of the first line of a log based on a regular expression.
              Mode: custom
              # Specify the regular expression that is used to match the beginning of the first line of a log.
              StartPattern: \d+-\d+-\d+.*
        # Specify the Logtail processing plug-ins.
        processors:
          # Use the processor_parse_regex_native plug-in to parse logs based on the specified regular expression.
          - Type: processor_parse_regex_native
            # Specify the name of the input field.
            SourceKey: content
            # Specify the regular expression that is used for the parsing. Use capturing groups to extract fields.
            Regex: (\d+-\d+-\d+\s*\d+:\d+:\d+)\s*(\S+)\s*(.*)
            # Specify the fields that you want to extract.
            Keys: ["time", "level", "msg"]
        # Configure the Logtail output plug-ins.
        flushers:
          # Use the flusher_sls plug-in to send logs to a specific Logstore. 
          - Type: flusher_sls
            # Make sure that the Logstore exists.
            Logstore: k8s-file
            # Make sure that the endpoint is valid.
            Endpoint: cn-hangzhou.log.aliyuncs.com
            Region: cn-hangzhou
            TelemetryType: logs
  4. Run the following command to apply the Logtail configuration. After the Logtail configuration is applied, Logtail starts to collect text logs from the specified containers and send the logs to Simple Log Service.

    In the following command, cube.yaml is a sample file name. You can specify a different file name based on your business requirements.

    kubectl apply -f cube.yaml
    Important

    After logs are collected, you must create indexes. Then, you can query and analyze the logs in the Logstore. For more information, see Create indexes.

CRD - AliyunLogConfig

To create a Logtail configuration, you only need to create a CR from the AliyunLogConfig CRD. After the Logtail configuration is created, it is automatically applied. If you want to modify a Logtail configuration that is created based on a CR, you must modify the CR.

  1. Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.

  2. Run the following command to create a YAML file.

    In the following command, cube.yaml is a sample file name. You can specify a different file name based on your business requirements.

    vim cube.yaml
  3. Enter the following script in the YAML file and configure the parameters based on your business requirements.

    Important
    • The value of the configName parameter must be unique in the Simple Log Service project that you use to install the Logtail components.

    • If multiple CRs are associated with the same Logtail configuration, the Logtail configuration is affected when you delete or modify one of the CRs. After a CR is deleted or modified, the status of other associated CRs becomes inconsistent with the status of the Logtail configuration in Simple Log Service.

    • For more information about CR parameters, see Use AliyunLogConfig to manage a Logtail configuration. In this example, the Logtail configuration includes settings for text log collection. For more information, see CreateConfig.

    Collect single-line text logs from specific containers

    In this example, a Logtail configuration named example-k8s-file is created to collect single-line text logs from the containers of all the pods whose names begin with app in the cluster. The file is test.LOG, and the path is /data/logs/app_1. The collected logs are stored in a Logstore named k8s-file, which belongs to a project named k8s-log-test.

    apiVersion: log.alibabacloud.com/v1alpha1
    kind: AliyunLogConfig
    metadata:
      # Specify the name of the resource. The name must be unique in the current Kubernetes cluster. 
      name: example-k8s-file
      namespace: kube-system
    spec:
      # Specify the name of the project. If you leave this parameter empty, the project named k8s-log-<your_cluster_id> is used.
      project: k8s-log-test
      # Specify the name of the Logstore. If the specified Logstore does not exist, Simple Log Service automatically creates a Logstore. 
      logstore: k8s-file
      # Configure the parameters for the Logtail configuration. 
      logtailConfig:
        # Specify the type of the data source. If you want to collect text logs, set the value to file. 
        inputType: file
        # Specify the name of the Logtail configuration. 
        configName: example-k8s-file
        inputDetail:
          # Specify the simple mode to collect text logs. 
          logType: common_reg_log
          # Specify the log file path. 
          logPath: /data/logs/app_1
          # Specify the log file name. You can use wildcard characters (* and ?) when you specify the log file name. Example: log_*.log. 
          filePattern: test.LOG
          # Set the value to true if you want to collect text logs from containers. 
          dockerFile: true
          # Specify conditions to filter containers.
          advanced:
            k8s:
              K8sPodRegex: '^(app.*)$'
  4. Run the following command to apply the Logtail configuration. After the Logtail configuration is applied, Logtail starts to collect text logs from the specified containers and send the logs to Simple Log Service.

    In the following command, cube.yaml is a sample file name. You can specify a different file name based on your business requirements.

    kubectl apply -f cube.yaml
    Important

    After logs are collected, you must create indexes. Then, you can query and analyze the logs in the Logstore. For more information, see Create indexes.

View Logtail configurations

Console

  1. Log on to the Simple Log Service console.

  2. In the Projects section, click the target project.

  3. Choose Log Storage > Logstores. Click the > icon of the target Logstore, and then choose Data Import > Logtail Configurations.

  4. Click the target Logtail configuration to view its details.

(Recommended) CRD - AliyunPipelineConfig

View all Logtail configurations created by using the AliyunPipelineConfig CRD

You can run the kubectl get clusteraliyunpipelineconfigs command to view the Logtail configurations.

View the details of a Logtail configuration created by using the AliyunPipelineConfig CRD

You can run the following command to view the details of the Logtail configuration. In the following command, <config_name> specifies the name of the required CR that is created from the AliyunPipelineConfig CRD. You can specify the value based on your business requirements.

kubectl get clusteraliyunpipelineconfigs <config_name> -o yaml

The following sample output is based on the CR created in Collect single-line text logs from specific containers. You can check the status parameter to determine whether the Logtail configuration is applied.

apiVersion: telemetry.alibabacloud.com/v1alpha1
kind: ClusterAliyunPipelineConfig
metadata:
  finalizers:
    - finalizer.pipeline.alibabacloud.com
  name: example-k8s-file
# The expected configuration.
spec:
  config:
    flushers:
      - Endpoint: cn-hangzhou.log.aliyuncs.com
        Logstore: k8s-file
        Region: cn-hangzhou
        TelemetryType: logs
        Type: flusher_sls
    inputs:
      - EnableContainerDiscovery: true
        FilePaths:
          - /data/logs/app_1/**/test.LOG
        Type: input_file
  logstores:
    - encryptConf: {}
      name: k8s-file
  project:
    name: k8s-log-clusterid
# The application status of the CR.
status:
  # Whether the CR is successfully applied.
  success: true
  # The status information of the CR.
  message: success
  # The update time of the current status.
  lastUpdateTime: '2024-06-19T09:21:34.215702958Z'
  # The Logtail configuration that was successfully applied. Default values are used in the Logtail configuration.
  lastAppliedConfig:
    # The time when the Logtail configuration was applied.
    appliedTime: '2024-06-19T09:21:34.215702958Z'
    # The detailed settings of the Logtail configuration.
    config:
      configTags:
        sls.crd.cluster: e2e-cluster-id
        sls.crd.kind: ClusterAliyunPipelineConfig
        sls.logtail.channel: CRD
      flushers:
        - Endpoint: cn-hangzhou.log.aliyuncs.com
          Logstore: k8s-file
          Region: cn-hangzhou
          TelemetryType: logs
          Type: flusher_sls
      inputs:
        - EnableContainerDiscovery: true
          FilePaths:
            - /data/logs/app_1/**/test.LOG
          Type: input_file
      name: example-k8s-file
    logstores:
      - appendMeta: true
        autoSplit: true
        encryptConf: {}
        maxSplitShard: 64
        name: k8s-file
        shardCount: 2
        ttl: 30
    machineGroups:
      - name: k8s-group-clusterid
    project:
      description: 'k8s log project, created by alibaba cloud log controller'
      endpoint: cn-hangzhou.log.aliyuncs.com
      name: k8s-log-clusterid
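
If you only want to check whether the Logtail configuration is applied, you can also print the status fields directly. The following jsonpath expression assumes the status structure shown in the preceding output.

kubectl get clusteraliyunpipelineconfigs example-k8s-file -o jsonpath='{.status.success}{"\n"}{.status.message}{"\n"}'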

CRD - AliyunLogConfig

View all Logtail configurations created by using the AliyunLogConfig CRD

You can run the kubectl get aliyunlogconfigs command to view the Logtail configurations.

View the details of a Logtail configuration created by using the AliyunLogConfig CRD

You can run the kubectl get aliyunlogconfigs <config_name> -o yaml command to view the details of the Logtail configuration. In this command, <config_name> specifies the name of the required CR that is created from the AliyunLogConfig CRD. You can specify the value based on your business requirements.

The status and statusCode parameters in the output indicate the status of the Logtail configuration.

  • If the value of the statusCode parameter is 200, the Logtail configuration is applied.

  • If the value of the statusCode parameter is not 200, the Logtail configuration fails to be applied.
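
To check the status from the command line, you can print only these fields. The following command is an illustration that assumes the status and statusCode parameters are nested under the status field of the CR; <config_name> is a placeholder.

# Prints the statusCode and status of the Logtail configuration.
kubectl get aliyunlogconfigs <config_name> -o jsonpath='{.status.statusCode} {.status.status}{"\n"}'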


Query and analyze the collected logs

  1. In the Projects section of the Simple Log Service console, click the project that you want to manage to go to the details page of the project.


  2. Find the Logstore that you want to manage, move the pointer over the Logstore, click the icon that appears, and then select Search & Analysis to view the logs of your Kubernetes cluster.


Troubleshooting

If an exception occurs when you use Logtail to collect logs from containers, such as standard containers and Kubernetes containers, you can troubleshoot the issue based on the following topic:

What do I do if an error occurs when I use Logtail to collect logs from containers?