Collect text logs from Kubernetes containers in Sidecar mode

Updated at: 2025-03-24 01:43
Important

This topic contains important information on necessary precautions. We recommend that you read this topic carefully before proceeding.

If you want to use a separate Logtail process to collect logs from all containers in a pod, you can install Logtail in a Kubernetes cluster in Sidecar mode. This topic describes the implementation, limits, prerequisites, and procedure of collecting container text logs in Sidecar mode.

Implementation


Sidecar mode

  • In Sidecar mode, each pod runs a Logtail container. You can use Logtail to collect logs from all containers in the pod. Log collection from each pod is isolated.

  • To ensure that Logtail can collect logs from other containers in a pod, make sure that the Logtail container and application containers share the same volume. For more information about how to collect container logs in Sidecar mode, see Sidecar container with a logging agent and Pods with multiple containers. For more information about volumes, see Storage basics.
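As a minimal sketch of this layout (illustrative names only, not the full template used later in this topic), two containers in one pod can share an emptyDir volume so that files written by the application container are readable by the Logtail container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo                # illustrative name
spec:
  containers:
    - name: app
      image: your-app-image         # the application writes log files under /var/log/app
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
    - name: logtail
      image: your-logtail-image     # the Logtail sidecar reads the same files
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app   # same volume, same path: both containers see the files
  volumes:
    - name: shared-logs
      emptyDir: {}                  # pod-local volume shared by all containers in the pod
```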

Prerequisites

  • Ports 80 (HTTP) and 443 (HTTPS) for outbound traffic are enabled for the server on which Logtail is installed. If the server is an Elastic Compute Service (ECS) instance, you can reconfigure the related security group rules to enable the ports. For more information about how to configure a security group rule, see Add a security group rule.

  • The kubectl command-line tool is installed and configured to access your Kubernetes cluster. For more information, see kubectl.

Usage notes

  • Make sure that logs are continuously generated in the container from which you want to collect logs. Logtail collects only incremental logs. If a log file on your server is not updated after a Logtail configuration is delivered and applied to the server, Logtail does not collect logs from the file. For more information, see Read log files.

  • Make sure that the files from which you want to collect logs are stored in the volume that is mounted to the Logtail container.

Step 1: Inject a Logtail container into a business pod

  1. Log on to your Kubernetes cluster.

  2. Create a YAML file. In the following command, sidecar.yaml is a sample file name. You can specify a different file name based on your business requirements.

    vim sidecar.yaml
  3. Enter the following script in the YAML file and configure the parameters based on your business requirements.

    Warning

    In the following YAML template, replace all placeholders in the ${} format with actual values. Do not modify or delete other parameters.

    YAML template

    apiVersion: batch/v1
    kind: Job
    metadata:
      # Add Job metadata, such as the name and namespace.
      name: ${job_name}
      namespace: ${namespace}
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            # Configure settings for an application container.
            - name: ${main_container_name}
              image: ${main_container_image}
              command: ["/bin/sh", "-c"]
              args:
                - until [[ -f /tasksite/cornerstone ]]; do sleep 1; done;
                  # Replace the command variable with the actual startup command of the application container.
                  ${container_start_cmd};
                  retcode=$?;
                  touch /tasksite/tombstone;
                  exit $retcode
              volumeMounts:
                # Mount the log directory of the application container to the shared volume.
                - name: ${shared_volume_name}
                  mountPath: ${dir_containing_your_files}
                # Create a mount target to interact with the Logtail container.
                - mountPath: /tasksite
                  name: tasksite
    
            # Configure settings for the Logtail container, which is a sidecar container.
            - name: logtail
              image: ${logtail_image}
              command: ["/bin/sh", "-c"]
              args:
                - /etc/init.d/ilogtaild start;
                  sleep 10; # Wait until the Logtail configuration is downloaded.
                  touch /tasksite/cornerstone;
                  until [[ -f /tasksite/tombstone ]]; do sleep 1; done;
                  sleep 10; # Wait until Logtail finishes sending logs.
                  /etc/init.d/ilogtaild stop;
              livenessProbe:
                exec:
                  command:
                    - /etc/init.d/ilogtaild
                    - status
                initialDelaySeconds: 30
                periodSeconds: 30
              env:
                # Specify a time zone. Specify the time zone in the Region/City format based on the region where the Kubernetes cluster resides. For example, if your cluster resides in the Chinese mainland, set the time zone to Asia/Shanghai. 
                # If the specified time zone is invalid, the time labels of raw logs and processed logs may not match. As a result, logs may be archived based on an incorrect point in time. 
                - name: TZ
                  value: "${timezone}"
                - name: ALIYUN_LOGTAIL_USER_ID
                  value: "${your_aliyun_user_id}"
                - name: ALIYUN_LOGTAIL_USER_DEFINED_ID
                  value: "${your_machine_group_user_defined_id}"
                - name: ALIYUN_LOGTAIL_CONFIG
                  value: "/etc/ilogtail/conf/${your_region_config}/ilogtail_config.json"
                # Specify the pod environment information as log labels.
                - name: "ALIYUN_LOG_ENV_TAGS"
                  value: "_pod_name_|_pod_ip_|_namespace_|_node_name_|_node_ip_"
                # Obtain the pod and node information.
                - name: "_pod_name_"
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: "_pod_ip_"
                  valueFrom:
                    fieldRef:
                      fieldPath: status.podIP
                - name: "_namespace_"
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                - name: "_node_name_"
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
                - name: "_node_ip_"
                  valueFrom:
                    fieldRef:
                      fieldPath: status.hostIP
              volumeMounts:
                # Mount the log directory of the Logtail container to the shared volume.
                - name: ${shared_volume_name}
                  mountPath: ${dir_containing_your_files}
                # Create a mount target to interact with the application container.
                - mountPath: /tasksite
                  name: tasksite
                # Specify a time zone that is the same as the time zone of your host.
                - name: tz-config
                  mountPath: /etc/localtime
                  readOnly: true
          volumes:
            # Define an empty shared volume for log storage.
            - name: ${shared_volume_name}
              emptyDir: {}
            # Define a volume for containers to communicate with each other.
            - name: tasksite
              emptyDir:
                medium: Memory
            - name: tz-config
              hostPath:
                path: /usr/share/zoneinfo/Asia/Shanghai  # Specify the time zone file on your host.
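The cornerstone and tombstone files in this template implement a start/stop handshake over the shared in-memory volume: the Logtail container signals readiness by creating cornerstone, and the application container signals completion by creating tombstone. The handshake can be simulated locally with plain files (a sketch only; a temporary directory stands in for the /tasksite volume):

```shell
# Simulate the sidecar handshake using a temporary directory instead of the shared volume.
TASKSITE=$(mktemp -d)

# "Logtail" side: signal readiness, then wait for the application to finish.
(
  touch "$TASKSITE/cornerstone"
  until [ -f "$TASKSITE/tombstone" ]; do sleep 1; done
  echo "logtail: application finished, flushing and stopping"
) &

# "Application" side: wait for Logtail to be ready, do the work, then signal completion.
until [ -f "$TASKSITE/cornerstone" ]; do sleep 1; done
echo "app: running business logic"
touch "$TASKSITE/tombstone"
wait
rm -rf "$TASKSITE"
```

This ordering guarantees that Logtail is running before the application starts writing logs, and that Logtail keeps running until after the application exits.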
    
    

    Key parameters

    ${timezone}: The time zone of the container. For more information, see Time zones.

    ${your_aliyun_user_id}: The ID of your Alibaba Cloud account. For more information, see Configure a user identifier.

    ${your_machine_group_user_defined_id}: The custom identifier of your machine group. You can create a machine group based on the custom identifier. Example: nginx-log-sidecar.
    Important: The custom identifier must be unique in the region where your project resides.

    ${your_region_config}: The region ID and network type of your project. For more information about regions, see Supported regions.
    • If logs are collected to your project over the Internet, specify the value in the region-internet format. For example, if your project resides in the China (Hangzhou) region, specify cn-hangzhou-internet.
    • If logs are collected to your project over an internal network of Alibaba Cloud, specify the value in the region format. For example, if your project resides in the China (Hangzhou) region, specify cn-hangzhou.

    ${logtail_image}: The address of the Logtail image. Example: registry.cn-hangzhou.aliyuncs.com/log-service/logtail:latest.

    ${shared_volume_name}: The name of the shared volume. You can specify a name based on your business requirements.
    Important: The name parameter in the volumeMounts node and the name parameter in the volumes node must have the same value. This ensures that the same volume is mounted to both the Logtail container and the application container.

    ${dir_containing_your_files}: The mount path. Specify the directory that contains the container text logs that you want to collect.

    Example

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        deployment.kubernetes.io/revision: '1'
      labels:
        app: deployment-file
        cluster_label: CLUSTER-LABEL-A
      name: deployment-file
      namespace: default
    spec:
      progressDeadlineSeconds: 600
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: deployment-file
      strategy:
        rollingUpdate:
          maxSurge: 25%
          maxUnavailable: 25%
        type: RollingUpdate
      template:
        metadata:
          labels:
            app: deployment-file
            cluster_label: CLUSTER-LABEL-A
        spec:
          containers:
            - name: timestamp-test
              image: 'mirrors-ssl.aliyuncs.com/busybox:latest'
              args:
                - >-
                  mkdir -p /root/log; while true; do date '+%Y-%m-%d %H:%M:%S'
                  >>/root/log/timestamp.log; echo 1 >>/root/log/timestamp.log; echo
                  2 >>/root/log/timestamp.log; echo 3 >>/root/log/timestamp.log;
                  echo 4 >>/root/log/timestamp.log; echo 5
                  >>/root/log/timestamp.log; echo 6 >>/root/log/timestamp.log;
                  echo 7 >>/root/log/timestamp.log; echo 8
                  >>/root/log/timestamp.log; echo 9 >>/root/log/timestamp.log;
                  sleep 10; done
              command:
                - /bin/sh
                - '-c'
                - '--'
              env:
                - name: cluster_id
                  value: CLUSTER-A
              imagePullPolicy: IfNotPresent
              resources: {}
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              volumeMounts:
                # Mount the log directory of the application container to the shared volume.
                - name: test-logs
                  mountPath: /root/log
                # Create a mount target to interact with the Logtail container.
                - mountPath: /tasksite
                  name: tasksite
                - name: tz-config
                  mountPath: /etc/localtime
                  readOnly: true
            # Configure settings for the Logtail container, which is a sidecar container.
            - name: logtail
              image: registry.cn-hangzhou.aliyuncs.com/log-service/logtail:v1.8.7.0-aliyun
              command: ["/bin/sh", "-c"]
              args:
                - /etc/init.d/ilogtaild start;
                  sleep 10;
                  touch /tasksite/cornerstone;
                  until [[ -f /tasksite/tombstone ]]; do sleep 1; done;
                  sleep 10;
                  /etc/init.d/ilogtaild stop;
              livenessProbe:
                exec:
                  command:
                    - /etc/init.d/ilogtaild
                    - status
                initialDelaySeconds: 30
                periodSeconds: 30
              resources:
                limits:
                  cpu: 500m
                  memory: 512Mi
                requests:
                  cpu: 10m
                  memory: 30Mi
              env:
                # Specify a time zone. Specify the time zone in the Region/City format based on the region where the Kubernetes cluster resides. For example, if your cluster resides in the Chinese mainland, set the time zone to Asia/Shanghai. 
                # If the specified time zone is invalid, the time labels of raw logs and processed logs may not match. As a result, logs may be archived based on an incorrect point in time. 
                - name: TZ
                  value: "Asia/Shanghai"
                # Replace the environment variables with actual values.
                - name: ALIYUN_LOGTAIL_USER_ID
                  value: "1290918****39680"
                - name: ALIYUN_LOGTAIL_USER_DEFINED_ID
                  value: "nginx-log-sidecar"
                - name: ALIYUN_LOGTAIL_CONFIG
                  value: "/etc/ilogtail/conf/cn-beijing-internet/ilogtail_config.json"
                # Specify the pod environment information as log labels.
                - name: "ALIYUN_LOG_ENV_TAGS"
                  value: "_pod_name_|_pod_ip_|_namespace_|_node_name_|_node_ip_"
                # Obtain the pod and node information.
                - name: "_pod_name_"
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: "_pod_ip_"
                  valueFrom:
                    fieldRef:
                      fieldPath: status.podIP
                - name: "_namespace_"
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                - name: "_node_name_"
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
                - name: "_node_ip_"
                  valueFrom:
                    fieldRef:
                      fieldPath: status.hostIP
              volumeMounts:
                # Mount the log directory of the Logtail container to the shared volume.
                - name: test-logs
                  mountPath: /root/log
                # Create a mount target to interact with the application container.
                - mountPath: /tasksite
                  name: tasksite
                # Specify a time zone that is the same as the time zone of your host.
                - name: tz-config
                  mountPath: /etc/localtime
                  readOnly: true
          volumes:
            # Define an empty shared volume for log storage.
            - name: test-logs
              emptyDir: {}
            # Define a volume for containers to communicate with each other.
            - name: tasksite
              emptyDir:
                medium: Memory
            - name: tz-config
              hostPath:
                path: /usr/share/zoneinfo/Asia/Shanghai  # Specify the time zone file on your server.
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
  4. Run the following command to apply the configurations in the sidecar.yaml file.

    In the following command, sidecar.yaml is a sample file name. You can specify a different file name based on your business requirements.

    kubectl apply -f sidecar.yaml
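After the apply succeeds, you can verify the rollout from the same machine. The pod label below comes from the sample Deployment in this topic; replace it with your own. The command requires kubectl access to the cluster:

```shell
# Check that both containers in the pod are running; READY should report 2/2
# (the application container plus the logtail sidecar).
if command -v kubectl >/dev/null 2>&1; then
  STATUS=$(kubectl get pods -n default -l app=deployment-file 2>&1 || echo "cluster not reachable")
else
  STATUS="kubectl not found on this machine"
fi
echo "$STATUS"
```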

Step 2: Create a custom identifier-based machine group

Important

The value of the ALIYUN_LOGTAIL_USER_DEFINED_ID parameter in the YAML file created in Step 1 is the custom identifier.

  1. Log on to the Simple Log Service console. In the Projects section, click the project that you want to manage.

  2. In the left-side navigation pane, choose Resources > Machine Groups. In the Machine Groups list, choose Machine Groups > Create Machine Group.

  3. In the Create Machine Group panel, configure parameters and click OK. The following table describes the parameters.

    Parameter

    Description

    Name

    The name of the machine group. The name must meet the following requirements:

    • The name can contain only lowercase letters, digits, hyphens (-), and underscores (_).

    • The name must start and end with a lowercase letter or a digit.

    • The name must be 2 to 128 characters in length.

    Important

    After you create a machine group, you cannot change the name of the machine group. Proceed with caution.

    Machine Group Identifier

    The identifier type of the machine group. Select Custom Identifier.

    Machine Group Topic

    Optional. The topic of the machine group. The topic is used to identify the logs that are generated by different servers. For more information, see Log topics.

    Custom Identifier

    The custom identifier. Enter the custom identifier that you specified in the YAML file. Example: nginx-log-sidecar.

Step 3: Create a Logtail configuration

  1. In the required project, find the Logstore that you want to manage. Expand the Logstore, choose Data Collection > Logtail Configurations, and then click the plus (+) icon.

  2. In the Quick Data Import dialog box, click the Kubernetes - File card.


  3. Select a project and a Logstore. Then, click Next. In this example, select the project that you use to install the Logtail components and the Logstore that you create.

  4. In the Machine Group Configurations step of the Import Data wizard, perform the following operations. For more information about machine groups, see Introduction to machine groups.

    1. Use one of the following settings based on your business requirements:

      • Kubernetes Clusters > ACK Daemonset

      • Kubernetes Clusters > Self-managed Cluster in DaemonSet Mode

    2. Select the machine group that you create in Step 2, confirm that the machine group is displayed in the Applied Server Groups section, and then click Next.

  5. Create a Logtail configuration and click Next. Then, Simple Log Service starts to collect logs.

    Note

    A Logtail configuration requires approximately 3 minutes to take effect.

    Global Configurations

    Parameter

    Description

    Configuration Name

    The name of the Logtail configuration. The name must be unique in a project. After you create the Logtail configuration, you cannot change the name of the Logtail configuration.

    Log Topic Type

    The method to generate log topics. For more information, see Log topics.

    • Machine Group Topic: The topics of machine groups are used as log topics. If you want to distinguish the logs from different machine groups, select this option.

    • File Path Extraction: You must specify a custom regular expression. A part of a log path that matches the regular expression is used as a log topic. If you want to distinguish the logs from different sources, select this option.

    • Custom: You must specify a custom log topic.

    Advanced Parameters

    Optional. The advanced parameters that are related to global configurations. For more information, see CreateLogtailPipelineConfig.

    Input Configurations

    Parameter

    Description

    Logtail Deployment Mode

    The deployment mode of Logtail. In this example, select Sidecar.

    File Path Type

    The type of the file path that you want to use to collect logs. Valid values: Path in Container and Host Path. If a hostPath volume is mounted to a container and you want to collect logs from files based on the mapped file path on the container host, set this parameter to Host Path. In other scenarios, set this parameter to Path in Container.

    File Path

    • If the required container runs on a Linux host, specify a path that starts with a forward slash (/). Example: /apsara/nuwa/**/app.Log.

    • If the required container runs on a Windows host, specify a path that starts with a drive letter. Example: C:\Program Files\Intel\**\*.Log.

    You can specify an exact directory and an exact name. You can also use wildcard characters to specify the directory and name. For more information, see Wildcard matching. When you configure this parameter, you can use only asterisks (*) or question marks (?) as wildcard characters.

    Simple Log Service scans all levels of the specified directory for the log files that match specified conditions. Examples:

    • If you specify /apsara/nuwa/**/*.log, Simple Log Service collects logs from the log files whose names are suffixed by .log in the /apsara/nuwa directory and the recursive subdirectories of the directory.

    • If you specify /var/logs/app_*/**/*.log, Simple Log Service collects logs from the log files that meet the following conditions: The file name is suffixed by .log. The file is stored in a subdirectory under the /var/logs directory or in a recursive subdirectory of the subdirectory. The name of the subdirectory matches the app_* pattern.

    • If you specify /var/log/nginx/**/access*, Simple Log Service collects logs from the log files whose names start with access in the /var/log/nginx directory and the recursive subdirectories of the directory.
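The first example can be previewed locally: `find` expresses the same "directory and all recursive subdirectories" match that /apsara/nuwa/**/*.log describes (a sketch for intuition only; Logtail's own matcher is authoritative):

```shell
# Build a small tree that mimics the documented example.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/apsara/nuwa/a/b"
touch "$ROOT/apsara/nuwa/app.log" "$ROOT/apsara/nuwa/a/b/svc.log" "$ROOT/apsara/nuwa/a/b/svc.txt"

# /apsara/nuwa/**/*.log: .log files in /apsara/nuwa and all of its recursive subdirectories.
MATCHES=$(find "$ROOT/apsara/nuwa" -name '*.log' | sort)
echo "$MATCHES"
rm -rf "$ROOT"
```

Both .log files are matched, at any depth; the .txt file is excluded.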

    Maximum Directory Monitoring Depth

    Specify the maximum number of levels of subdirectories that you want to monitor. The subdirectories are in the log file directory that you specify. This parameter specifies the levels of subdirectories that can be matched for the wildcard characters ** included in the value of File Path. A value of 0 specifies that only the log file directory that you specify is monitored.

    Warning

    We recommend that you set this parameter to the smallest value that meets your business requirements. A large value increases the resources that Logtail consumes for monitoring and may increase collection latency.
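As intuition for the depth limit, a monitoring depth of 1 means the specified directory plus one level of subdirectories, which `find -maxdepth 2` approximates for files (a local sketch, not Logtail itself):

```shell
ROOT=$(mktemp -d)
mkdir -p "$ROOT/log/a/b"
touch "$ROOT/log/x.log" "$ROOT/log/a/y.log" "$ROOT/log/a/b/z.log"

# Depth 1: x.log and a/y.log are visible; a/b/z.log is one level too deep.
FOUND=$(find "$ROOT/log" -maxdepth 2 -name '*.log' | sort)
echo "$FOUND"
rm -rf "$ROOT"
```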

    File Encoding

    The encoding format of log files.

    First Collection Size

    Specify the size of data that Logtail can collect from a log file the first time Logtail collects logs from the file. The default value of First Collection Size is 1024. Unit: KB.

    • If the file size is less than 1,024 KB, Logtail collects data from the beginning of the file.

    • If the file size is greater than 1,024 KB, Logtail collects the last 1,024 KB of data in the file.

    You can specify First Collection Size based on your business requirements. Valid values: 0 to 10485760. Unit: KB.

    Collection Blacklist

    If you turn on Collection Blacklist, you must configure a blacklist to specify the directories or files that you want Simple Log Service to skip when it collects logs. You can specify exact directories and file names. You can also use wildcard characters to specify directories and file names. When you configure this parameter, you can use only asterisks (*) or question marks (?) as wildcard characters.

    Important
    • If you use wildcard characters to configure File Path and you want to skip some directories in the specified directory, you must configure Collection Blacklist and enter a complete directory.

      For example, if you set File Path to /home/admin/app*/log/*.log and you want to skip all subdirectories in the /home/admin/app1* directory, you must select Directory Blacklist and enter /home/admin/app1*/** in the Directory Name field. If you enter /home/admin/app1*, the blacklist does not take effect.

    • When a blacklist is in use, computational overhead is generated. We recommend that you add up to 10 entries to the blacklist.

    • You cannot specify a directory path that ends with a forward slash (/). For example, if you set the path to /home/admin/dir1/, the directory blacklist does not take effect.

    The following types of blacklists are supported: File Path Blacklist, File Blacklist, and Directory Blacklist.

    • If you select File Path Blacklist and enter /home/admin/private*.log in the File Path Name field, all files whose names are prefixed by private and suffixed by .log in the /home/admin/ directory are skipped.

    • If you select File Path Blacklist and enter /home/admin/private*/*_inner.log in the File Path Name field, all files whose names are suffixed by _inner.log in the subdirectories whose names are prefixed by private in the /home/admin/ directory are skipped. For example, the /home/admin/private/app_inner.log file is skipped, but the /home/admin/private/app.log file is not skipped.

    If you select File Blacklist and enter app_inner.log in the File Name field, all files whose names are app_inner.log are skipped.

    • If you select Directory Blacklist and enter /home/admin/dir1 in the Directory Name field, all files in the /home/admin/dir1 directory are skipped.

    • If you select Directory Blacklist and enter /home/admin/dir* in the Directory Name field, the files in all subdirectories whose names are prefixed by dir in the /home/admin/ directory are skipped.

    • If you select Directory Blacklist and enter /home/admin/*/dir in the Directory Name field, all files in the dir subdirectory in each second-level subdirectory of the /home/admin/ directory are skipped. For example, the files in the /home/admin/a/dir directory are skipped, but the files in the /home/admin/a/b/dir directory are not skipped.
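The directory blacklist behaves like pruning matching directories before collection. A local `find -prune` sketch of the /home/admin/dir* example (illustrative paths under a temporary directory):

```shell
ROOT=$(mktemp -d)
mkdir -p "$ROOT/admin/dir1" "$ROOT/admin/keep"
touch "$ROOT/admin/dir1/a.log" "$ROOT/admin/keep/b.log"

# Blacklisting "dir*" prunes those directories; only files elsewhere are collected.
KEPT=$(find "$ROOT/admin" -type d -name 'dir*' -prune -o -type f -name '*.log' -print)
echo "$KEPT"
rm -rf "$ROOT"
```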

    Allow File to Be Collected for Multiple Times

    By default, you can use only one Logtail configuration to collect logs from a log file. To use multiple Logtail configurations to collect logs from a log file, turn on Allow File to Be Collected for Multiple Times.

    Advanced Parameters

    Specifies whether to manually configure specific parameters of a Logtail configuration. For more information, see CreateLogtailPipelineConfig.

    Processor Configurations

    Parameter

    Description

    Log Sample

    Add a sample log that is collected from an actual scenario. Sample logs help you configure the log processing parameters. You can add multiple sample logs. Make sure that the total length of the logs does not exceed 1,500 characters.

    [2023-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened
        at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
        at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
        at TestPrintStackTrace.main(TestPrintStackTrace.java:16)

    Multi-line Mode

    • Specify the type of multi-line logs. A multi-line log spans multiple consecutive lines. You can configure this parameter to identify each multi-line log in a log file.

      • Custom: A multi-line log is identified based on the value of Regex to Match First Line.

      • Multi-line JSON: Each JSON object is expanded into multiple lines. Example:

        {
          "name": "John Doe",
          "age": 30,
          "address": {
            "city": "New York",
            "country": "USA"
          }
        }
    • Configure Processing Method If Splitting Fails.

      Exception in thread "main" java.lang.NullPointerException
          at com.example.MyClass.methodA(MyClass.java:12)
          at com.example.MyClass.methodB(MyClass.java:34)
          at com.example.MyClass.main(MyClass.java:50)

      For the preceding sample log, Simple Log Service can discard the log or retain each single line as a log when it fails to split the log.

      • Discard: The log is discarded.

      • Retain Single Line: Each line of log text is retained as a log. A total of four logs are retained.
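For the Custom mode, a quick way to sanity-check a candidate first-line regular expression is to count how many lines of a sample it matches; exactly one match per log entry means the multi-line log will be split correctly (the regex and sample below are illustrative):

```shell
SAMPLE=$(mktemp)
cat > "$SAMPLE" <<'EOF'
Exception in thread "main" java.lang.NullPointerException
    at com.example.MyClass.methodA(MyClass.java:12)
    at com.example.MyClass.methodB(MyClass.java:34)
    at com.example.MyClass.main(MyClass.java:50)
EOF

# The candidate first-line regex should match only the opening line.
COUNT=$(grep -cE '^Exception' "$SAMPLE")
echo "$COUNT"
rm -f "$SAMPLE"
```

A count of 1 indicates that the four lines would be grouped into a single log.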

    Processing Method

    Select Processors. You can add native plug-ins and extended plug-ins for data processing. For more information about Logtail plug-ins for data processing, see Logtail plug-ins overview.

    Important

    You are subject to the limits of Logtail plug-ins for data processing. For more information, see the on-screen instructions in the Simple Log Service console.

    • Logtail V2.0

      • You can combine native plug-ins for data processing in any order.

      • You can combine native plug-ins and extended plug-ins. Make sure that extended plug-ins are added after native plug-ins.

    • Logtail earlier than V2.0

      • You cannot add native plug-ins and extended plug-ins at the same time.

      • You can use native plug-ins only to collect text logs. When you add native plug-ins, take note of the following items:

        • You must add one of the following Logtail plug-ins for data processing as the first plug-in: Data Parsing (Regex Mode), Data Parsing (Delimiter Mode), Data Parsing (JSON Mode), Data Parsing (NGINX Mode), Data Parsing (Apache Mode), and Data Parsing (IIS Mode).

        • After you add the first plug-in, you can add a Time Parsing plug-in, a Data Filtering plug-in, and multiple Data Masking plug-ins.

      • When you configure the Retain Original Field if Parsing Fails and Retain Original Field if Parsing Succeeds parameters, you can use only the following parameter combinations. The behavior of other combinations is not guaranteed.

        • Upload logs that are parsed.


        • Upload logs that are obtained after parsing if the parsing is successful, and upload raw logs if the parsing fails.


        • Upload logs that are obtained after parsing and add a raw log field to the logs if the parsing is successful, and upload raw logs if the parsing fails.

          For example, if a raw log is "content": "{"request_method":"GET", "request_time":"200"}" and the raw log is successfully parsed, the system adds a raw log field to the log that is obtained after parsing. The raw log field is specified by the New Name of Original Field parameter. If you do not configure the parameter, the original field name is used. The field value is {"request_method":"GET", "request_time":"200"}.


  6. Create indexes and preview data. Then, click Next. By default, full-text indexing is enabled for Simple Log Service. You can also configure field indexes based on collected logs in manual mode or automatic mode. To configure field indexes in automatic mode, click Automatic Index Generation. This way, Simple Log Service automatically creates field indexes. For more information, see Create indexes.

    If you want to query all fields in logs, we recommend that you use full-text indexes. If you want to query only specific fields, we recommend that you use field indexes. This helps reduce index traffic. If you want to analyze fields, you must create field indexes. You must include a SELECT statement in your query statement for analysis.

  7. Click Query Log. Then, you are navigated to the query and analysis page of the created Logstore.

Step 4: Query and analyze logs

In the End step of the Import Data wizard, click Query Log. Then, you are navigated to the query and analysis page of the created Logstore. An error message may appear if indexes are not yet created. Close the error message, wait for approximately 1 minute, and then view the collected logs.

Enter a query statement in the search box, specify a query time range, and then click Search & Analyze. Then, you can obtain the logs that meet the specified conditions. For more information about the query and analysis syntax, see Query and analyze logs in index mode.
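For example, with the environment tags that the sidecar template attaches to each log, a query such as the following (hypothetical; the referenced fields require field indexes) counts logs per node:

```
_namespace_: default | SELECT _node_name_, COUNT(*) AS log_count GROUP BY _node_name_
```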


References

View Logtail configurations

  1. Log on to the Simple Log Service console.

  2. In the Projects section, click the target project.

  3. Choose Log Storage > Logstores. Click the > icon of the target Logstore, and then choose Data Import > Logtail Configurations.

  4. Click the target Logtail configuration to view its details.

Troubleshooting
