Simple Log Service offers two deployment methods for Logtail to collect Kubernetes logs: DaemonSet and Sidecar. For an explanation of the differences between these methods, see the Logtail Installation and Collection Guide for Kubernetes Cluster Scenarios. This topic describes how to deploy Logtail using DaemonSet mode for text log collection from Alibaba Cloud ACK clusters.
Prerequisites
- Activate Simple Log Service. For more information, see Activate Simple Log Service.
Notes
- This guide applies only to ACK managed clusters and ACK dedicated clusters.
- To collect container application logs from ACK Serverless clusters, see Collect Application Logs by Using Pod Environment Variables.
- For self-managed Kubernetes clusters, or when the ACK cluster and Simple Log Service belong to different Alibaba Cloud accounts, see Collect Text Logs from Self-managed K8s Clusters (Deploy Logtail in the DaemonSet Mode).
Procedure
Deploying Logtail in DaemonSet mode to collect text logs from ACK clusters involves three main steps:
- Install the Logtail components: Install the required Logtail components in your ACK cluster, including the DaemonSet logtail-ds, the ConfigMap alibaba-log-configuration, the Deployment alibaba-log-controller, and other resources. These components enable Simple Log Service to deliver collection configurations to Logtail and manage log collection tasks.
- Create Logtail collection configurations: Based on the collection configurations, Logtail collects incremental logs, processes them, and uploads them to a Logstore. This topic introduces four methods to create collection configurations: CRD-AliyunPipelineConfig (recommended), CRD-AliyunLogConfig, the console, and environment variables.
- Query and analyze logs: After the configuration succeeds, a Logstore is automatically created and you can view the collected log data.
Step 1: Install the Logtail component
Install the Logtail component in an existing ACK cluster
- Log on to the ACK console. In the left-side navigation pane, click Clusters.
- On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose .
- On the Logs And Monitoring tab, find logtail-ds, and then click Install.
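After the installation completes, you can optionally verify from the command line that the components listed in the Procedure section are present. The following commands are a minimal sketch; they assume that the components are installed in the kube-system namespace and that kubectl is configured for the cluster:
```shell
# Check that the Logtail DaemonSet is running on the cluster nodes
kubectl get daemonset logtail-ds -n kube-system
# Check that the controller that syncs collection configurations is available
kubectl get deployment alibaba-log-controller -n kube-system
# Check that the generated configuration ConfigMap exists
kubectl get configmap alibaba-log-configuration -n kube-system
```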
Install the Logtail component when creating a new ACK cluster
- Log on to the ACK console. In the left-side navigation pane, click Clusters.
- Click Create Cluster. On the Component Configuration page, select Use Simple Log Service.
  This topic describes only the configurations related to Simple Log Service. For details about other configuration items, see Create an ACK Managed Cluster.
  When you select Use Simple Log Service, you are prompted to specify a project to manage the collected container logs:
  - Use An Existing Project: Select an existing project to manage the collected container logs.
  - Create A New Project: Simple Log Service automatically creates a project to manage the collected container logs. The ClusterID in the generated project name is the unique identifier of your new Kubernetes cluster.
- On the Component Configuration page, control plane component logs are enabled by default. With this setting, logs of the control plane components are automatically collected into the specified project and billed on a pay-as-you-go basis. Decide whether to enable this feature based on your requirements. For more information, see Manage Control Plane Component Logs.
After installation, Simple Log Service automatically generates a project named k8s-log-${your_k8s_cluster_id} and creates the following resources within this project.
Resource Type | Resource Name | Purpose | Example
--- | --- | --- | ---
Machine Group | k8s-group-${your_k8s_cluster_id} | The machine group of logtail-daemonset, which is used in log collection scenarios. | k8s-group-my-cluster-123
Machine Group | k8s-group-${your_k8s_cluster_id}-statefulset | The machine group of logtail-statefulset, which is used in metric collection scenarios. | k8s-group-my-cluster-123-statefulset
Machine Group | k8s-group-${your_k8s_cluster_id}-singleton | The machine group of a single instance, which is applicable to Logtail configurations that run on a single instance. | k8s-group-my-cluster-123-singleton
Logstore | config-operation-log | Stores the logs of alibaba-log-controller in the Logtail component. Do not create collection configurations in this Logstore. | N/A

Do not delete the Logstore named config-operation-log.
Step 2: Create Logtail collection configurations
This section introduces four methods for creating collection configurations. It is recommended to use only one method to manage Logtail collection configurations:
- CRD-AliyunPipelineConfig (recommended): Ideal for scenarios where you want to manage log collection configurations as Kubernetes resources. It supports complex data processing logic and lets you manage collection configurations through a CRD, so that configurations can be updated and deployed together with your applications to ensure version consistency.
  This method requires the logtail-ds component version of ACK to be higher than 1.8.10. For upgrade instructions, see Upgrade Automatically Installed Logtail Components.
- Simple Log Service console: A graphical interface for direct management, suitable for quickly creating configurations without editing configuration files. Some advanced features and custom requirements cannot be implemented in the console.
- Environment variables: Suitable for quickly configuring basic log parameters through environment variables. It allows simple configuration adjustments but does not support complex processing logic.
- CRD-AliyunLogConfig: An earlier version of the CRD. Compared with it, the newer CRD-AliyunPipelineConfig offers improved extensibility and stability and reduces configuration complexity. We recommend that you use the new version. For a detailed comparison of capabilities, see CRD Type Differences.
CRD-AliyunPipelineConfig (recommended)
To create a Logtail collection configuration, simply create the AliyunPipelineConfig custom resource. The resource takes effect automatically after creation.
For Logtail collection configurations created through custom resources, modifications must be made by updating the corresponding custom resource. Changes made to the Logtail collection configuration in the Simple Log Service console will not sync to the custom resource.
- Log on to the ACK console.
- In the left-side navigation pane, click Cluster List.
- On the Cluster List page, click the name of the target cluster, and then select the Connection Information tab.
- Click Manage Cluster Through Workbench in the upper-right corner.
- Create a new YAML file, modify it based on your actual environment, and use the following sample script:
This YAML creates a Logtail collection configuration named `example-k8s-file` that collects the content of `test.LOG` files in multi-line text mode from the path `/data/logs/app_1` in all containers in the cluster whose names contain `app`. The logs are sent to a Logstore named `k8s-file` in a project named `k8s-log-test`.
You must change the project in the example: log on to the Simple Log Service console and use the project named `k8s-log-<your_cluster_id>` that was generated when you installed Logtail. Change the FilePaths in the example to your container's file path. For more information, see Container File Path Mapping. If you want to customize the content of the YAML file, see CR Parameter Description for details about the parameters of the AliyunPipelineConfig custom resource. For details about the Logtail collection configuration provided by the config item in the YAML file, such as the supported input, output, and processing plug-in types and the container filtering methods, see PipelineConfig.
```yaml
apiVersion: telemetry.alibabacloud.com/v1alpha1
kind: ClusterAliyunPipelineConfig
metadata:
  # Specify the name of the resource. The name must be unique in the current Kubernetes cluster.
  # This name is also the name of the Logtail collection configuration that is created. Duplicate names do not take effect.
  name: example-k8s-file
spec:
  # Specify the target project
  project:
    name: k8s-log-test
  logstores:
    # Create a Logstore named k8s-file
    - name: k8s-file
  # Define the Logtail collection configuration
  config:
    # Sample log (optional)
    sample: |
      2024-06-19 16:35:00 INFO test log
      line-1
      line-2
      end
    # Define input plug-ins
    inputs:
      # Use the input_file plug-in to collect multi-line text logs from containers
      - Type: input_file
        # File path in the container
        FilePaths:
          - /data/logs/app_1/**/test.LOG
        # Enable the container discovery feature
        EnableContainerDiscovery: true
        # Add conditions to filter containers. Multiple conditions are evaluated by using a logical AND.
        ContainerFilters:
          # Specify the namespace of the pod to which the container to be collected belongs. Regular expression matching is supported.
          K8sNamespaceRegex: default
          # Specify the name of the container to be collected. Regular expression matching is supported.
          K8sContainerRegex: ^(.*app.*)$
        # Enable multi-line log collection. Delete this configuration for single-line log collection.
        Multiline:
          # Use the custom mode to match the beginning of the first line of a log based on a regular expression
          Mode: custom
          # Configure the regular expression for the beginning of a line
          StartPattern: \d+-\d+-\d+.*
    # Define processing plug-ins
    processors:
      # Use the processor_parse_regex_native plug-in to parse logs based on the specified regular expression
      - Type: processor_parse_regex_native
        # Specify the name of the input field
        SourceKey: content
        # Specify the regular expression that is used for the parsing. Use capturing groups to extract fields.
        Regex: (\d+-\d+-\d+\s*\d+:\d+:\d+)\s*(\S+)\s*(.*)
        # Specify the fields that you want to extract
        Keys: ["time", "level", "msg"]
    # Define output plug-ins
    flushers:
      # Use the flusher_sls plug-in to send logs to a specific Logstore
      - Type: flusher_sls
        # Make sure that the Logstore exists
        Logstore: k8s-file
        # Make sure that the endpoint is valid
        Endpoint: cn-hangzhou.log.aliyuncs.com
        Region: cn-hangzhou
        TelemetryType: logs
```
- Run `kubectl apply -f example.yaml`, where `example.yaml` is the name of the YAML file that you created. Logtail then begins collecting text logs from the containers and sends them to Simple Log Service.
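To confirm that the configuration was accepted, you can inspect the custom resource and its status. The following command is a minimal sketch; it assumes that the CRD registers the cluster-scoped resource under the lowercase name clusteraliyunpipelineconfig and that kubectl is configured for the cluster:
```shell
# Print the custom resource, including its status section, which reports
# whether the Logtail collection configuration was created successfully
kubectl get clusteraliyunpipelineconfig example-k8s-file -o yaml
```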
CRD-AliyunLogConfig
To create a Logtail collection configuration, simply create the AliyunLogConfig custom resource. The configuration takes effect automatically after creation.
For Logtail collection configurations created through custom resources, modifications must be made by updating the corresponding custom resource. Changes made to the Logtail collection configuration in the Simple Log Service console will not sync to the custom resource.
- Log on to the ACK console.
- In the left-side navigation pane, click Cluster List.
- On the Cluster List page, click the name of the target cluster, and then select the Connection Information tab.
- Click Manage Cluster Through Workbench in the upper-right corner.
- Create a new YAML file in Workbench, modify it based on your actual environment, and use the following sample script:
This YAML script creates a Logtail collection configuration named `example-k8s-file` that collects the content of `test.LOG` files in simple text mode from the path `/data/logs/app_1` in the containers of all pods in the cluster whose names start with `app`. The logs are sent to a Logstore named `k8s-file` in a project named `k8s-log-<your_cluster_id>`.
You may need to change the logPath in the example to your actual log file path. If you want to customize the content of the YAML file, see CR Parameter Description for details about the parameters of the AliyunLogConfig custom resource. For details about the Logtail collection configuration provided by the logtailConfig item in the YAML file, such as the supported input, output, and processing plug-in types and the container filtering methods, see AliyunLogConfigDetail.
```yaml
apiVersion: log.alibabacloud.com/v1alpha1
kind: AliyunLogConfig
metadata:
  # Specify the name of the resource. The name must be unique in the current Kubernetes cluster.
  name: example-k8s-file
  # Specify the namespace to which the resource belongs.
  namespace: kube-system
spec:
  # Specify the name of the project. If you leave this parameter empty, the project named k8s-log-<your_cluster_id> is used.
  # project: k8s-log-test
  # Specify the name of the Logstore. If the specified Logstore does not exist, Simple Log Service automatically creates one.
  logstore: k8s-file
  # Specify the Logtail collection configuration.
  logtailConfig:
    # Specify the type of the data source. To collect text logs, set the value to file.
    inputType: file
    # Specify the name of the Logtail collection configuration. The name must be the same as metadata.name.
    configName: example-k8s-file
    inputDetail:
      # Specify the simple mode to collect text logs.
      logType: common_reg_log
      # Specify the path to the log file.
      logPath: /data/logs/app_1
      # Specify the log file name. Wildcard characters such as asterisks (*) and question marks (?) are supported. Example: log_*.log.
      filePattern: test.LOG
      # To collect container text logs, you must set dockerFile to true.
      dockerFile: true
      # Enable multi-line log collection. Delete this configuration for single-line log collection.
      # Regular expression that matches the beginning of a log line.
      logBeginRegex: \d+-\d+-\d+.*
      # Specify the conditions that are used to filter containers.
      advanced:
        k8s:
          K8sPodRegex: '^(app.*)$'
```
- Run `kubectl apply -f example.yaml`, where `example.yaml` is the name of the YAML file that you created. Logtail then begins collecting text logs from the containers and sends them to Simple Log Service.
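As with the previous method, you can optionally confirm that the custom resource was accepted. This is a minimal sketch that assumes the CRD registers the resource under the lowercase name aliyunlogconfig and that the resource was created in the kube-system namespace, as in the sample YAML:
```shell
# Print the custom resource, including its status, which reports whether
# the collection configuration was applied
kubectl get aliyunlogconfig example-k8s-file -n kube-system -o yaml
```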
Simple Log Service Console
This method is suitable for creating and managing a small number of Logtail collection configurations. You do not need to log on to the Kubernetes cluster. The procedure is simple but does not support batch configuration.
- Log on to the Simple Log Service console.
- From the Project list, select the project that you used when you installed the Logtail component, such as `k8s-log-<your_cluster_id>`. On the project page, click Logtail Configuration of the target Logstore, add a Logtail configuration, and then click Kubernetes-file Access Now.
- On the Machine Group Configuration page, in the K8s scenario, select the k8s-group-${your_k8s_cluster_id} machine group under the ACK Daemonset method, click > to add it to Application Machine Group, and then click Next.
- Create the Logtail collection configuration. Complete the required settings as described below, and then click Next. The Logtail collection configuration takes effect in about 1 minute.
This section describes only the main configurations. For detailed configurations, see Logtail Collection Configuration.
- Global Configuration
  Enter the configuration name in Global Configuration.
- Input Configuration
  - Logtail Deployment Mode: Select DaemonSet.
  - File Path Type: Specify whether the file path to be collected is a path in the container or on the host. If a hostPath volume is mounted to the container and you want to collect logs based on the mapped file path on the host, set this parameter to Host Path. Otherwise, set it to Path in Container.
  - File Path: The directory from which logs are collected. The path must begin with a forward slash (/). For example, `/data/wwwlogs/main/**/*.Log` matches all files that end with .Log in the `/data/wwwlogs/main` directory and its subdirectories. To control how deep the `**` wildcard can descend, adjust the maximum directory monitoring depth. A value of 0 means that only the current directory is monitored. For an illustration, see the sketch after this list.
- Create Index and Preview Data: Simple Log Service enables full-text indexing by default, which indexes all log fields for queries. You can also manually create field indexes from the collected logs, or click Auto Generate Index to have Simple Log Service generate field indexes for you. Field indexes enable precise queries on specific fields, which reduces indexing costs and improves query efficiency. For more information, see Create Index.
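The following sketch, referenced in the File Path item above, illustrates how the `**` wildcard and the maximum directory monitoring depth interact. The directory layout and file names are hypothetical:
```
# File Path: /data/wwwlogs/main/**/*.Log

# Maximum directory monitoring depth = 0: only the current directory is monitored
/data/wwwlogs/main/app.Log             # matched
/data/wwwlogs/main/shop/access.Log     # not matched (one level deep)

# Maximum directory monitoring depth = 1: one level of subdirectories is also monitored
/data/wwwlogs/main/app.Log             # matched
/data/wwwlogs/main/shop/access.Log     # matched
/data/wwwlogs/main/shop/2024/error.Log # not matched (two levels deep)
```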
Environment Variables
This method supports only single-line text. To configure multi-line text or other log formats, you must use the custom resource method or configure it in the Simple Log Service console.
- Configure Simple Log Service when you create an application, by using either of the following methods:
Configure through the Container Console
- Log on to the Container Service Management Console. In the left-side navigation pane, select Clusters.
- On the Cluster List page, click the name of the target cluster. In the left-side navigation pane, select .
- On the Stateless page, select the namespace from the Namespace drop-down list at the top, and then click Create With Image in the upper-right corner.
- On the Application Basic Information tab, set the Application Name, click Next, and then go to the Container Configuration page to set the image name.
This section describes only the configurations related to Simple Log Service. For more information about other application configurations, see Create a Stateless Deployment.
- In the Log Configuration area, configure log-related information.
  - Set the collection configuration.
    Click Collection Configuration to create a collection configuration. Each collection configuration consists of two items: Logstore and Log Path In Container.
    - Logstore: Specify the name of the Logstore that is used to store the collected log data. If the Logstore does not exist, ACK automatically creates one in the Simple Log Service project that is associated with your ACK cluster.
      Note: The default log retention period of Logstores is 90 days.
    - Log Path in Container: Specify the path from which you want to collect log data. For example, use /usr/local/tomcat/logs/catalina.*.log to collect the text logs of Tomcat.
    All settings are added as configuration entries to the corresponding Logstore. By default, logs are collected in simple mode (by row).
  - Set a custom tag.
    Click Custom Tag to create a custom tag. Each custom tag is a key-value pair that is appended to the collected logs. You can use custom tags to mark the log data of the container, for example with a version number.
- After you complete all configurations, click Next in the upper-right corner to proceed to the next step.
For more information about the subsequent steps, see Create a Stateless Deployment.
Configure through a YAML Template
- Log on to the Container Service Management Console. In the left-side navigation pane, select Cluster List.
- On the Cluster List page, click the name of the target cluster. In the left-side navigation pane, select .
- On the Stateless page, select the namespace from the Namespace drop-down list at the top, and then click Create Resource With YAML in the upper-right corner.
- Configure the YAML file.
  The syntax of the YAML template is the same as the Kubernetes syntax. However, to specify collection configurations for containers, you must use `env` to add the collection configuration and custom tags for the container, and create the corresponding `volumeMounts` and `volumes` based on the collection configuration. The following is a simple example of a pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-demo
spec:
  containers:
    - name: my-demo-app
      image: 'registry.cn-hangzhou.aliyuncs.com/log-service/docker-log-test:latest'
      # Configure environment variables
      env:
        - name: aliyun_logs_log-varlog
          value: /var/log/*.log
        - name: aliyun_logs_mytag1_tags
          value: tag1=v1
      # Configure volume mounting
      volumeMounts:
        - name: volumn-sls-mydemo
          mountPath: /var/log
      # If the pod restarts repeatedly, you can add a sleep command to the startup parameters of the pod
      command: ["sh", "-c"]   # Run commands in the shell
      args: ["sleep 3600"]    # Make the pod sleep for 3,600 seconds (1 hour)
  volumes:
    - name: volumn-sls-mydemo
      emptyDir: {}
```
- Create your collection configuration and custom tags through environment variables. All configuration-related environment variables use `aliyun_logs_` as a prefix.
  - Create collection configurations in the following format:
```yaml
- name: aliyun_logs_log-varlog
  value: /var/log/*.log
```
    The example creates a collection configuration in the `aliyun_logs_{key}` format, where the corresponding `{key}` is `log-varlog`.
    - `aliyun_logs_log-varlog`: This environment variable creates a Logstore named `log-varlog` with a log collection path of /var/log/*.log. The corresponding Simple Log Service collection configuration is also named `log-varlog`. The purpose is to collect the content of the /var/log/*.log files in the container into the `log-varlog` Logstore.
  - Create custom tags in the following format:
```yaml
- name: aliyun_logs_mytag1_tags
  value: tag1=v1
```
    After a tag is added, it is automatically appended to the log data that is collected from the container. `mytag1` can be any name that does not contain an underscore (_).
- If your collection configuration specifies a collection path other than stdout, you must create the corresponding `volumeMounts` in this step. The example collection configuration collects /var/log/*.log, so a `volumeMounts` entry for /var/log is added.
- After you finish writing the YAML, click Create to submit the configuration to the Kubernetes cluster for execution.
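If you prefer the command line to the console's Create button, the same manifest can also be applied with kubectl. This is a minimal sketch that assumes kubectl access to the cluster and that the manifest above is saved as my-demo.yaml (a hypothetical file name):
```shell
# Apply the pod manifest that defines the aliyun_logs_* environment variables
kubectl apply -f my-demo.yaml
# Confirm that the pod is running
kubectl get pod my-demo
```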
- Configure advanced parameters for environment variables.
  You can configure container environment variables to customize log collection. Set advanced parameters based on your needs to meet specific log collection requirements.
  Important: You cannot use environment variables to configure log collection in edge computing scenarios.
Field | Description | Example | Notes
--- | --- | --- | ---
aliyun_logs_{key} | Required. {key} can contain only lowercase letters, digits, and hyphens (-). If the specified aliyun_logs_{key}_logstore does not exist, a Logstore named {key} is created to store the collected log data. If the value is stdout, the stdout of the container is collected. Any other value is treated as a log path inside the container. | `- name: aliyun_logs_catalina  value: stdout` or `- name: aliyun_logs_access-log  value: /var/log/nginx/access.log` | The default log collection mode is simple mode. If you want to parse log content, we recommend that you use the Simple Log Service console or a CRD. {key} specifies the name of the Logtail configuration, which must be unique in the Kubernetes cluster.
aliyun_logs_{key}_tags | Optional. The value must be in the {tag-key}={tag-value} format. It is used to add tags to log data. | `- name: aliyun_logs_catalina_tags  value: app=catalina` | N/A
aliyun_logs_{key}_project | Optional. Specifies a project in Simple Log Service. If this environment variable is not configured, the project that you specified when you created the cluster is used. | `- name: aliyun_logs_catalina_project  value: my-k8s-project` | The project must be in the same region as Logtail.
aliyun_logs_{key}_logstore | Optional. Specifies a Logstore in Simple Log Service. If this environment variable is not configured, the Logstore is named {key}. | `- name: aliyun_logs_catalina_logstore  value: my-logstore` | N/A
aliyun_logs_{key}_shard | Optional. Specifies the number of shards of the Logstore. Valid values: 1 to 10. If this environment variable is not configured, the default value is 2. Note: If the specified Logstore already exists, this variable does not take effect. | `- name: aliyun_logs_catalina_shard  value: '4'` | N/A
aliyun_logs_{key}_ttl | Optional. Specifies the log retention period. Valid values: 1 to 3650. To retain log data permanently, set the value to 3650. If this environment variable is not configured, the default retention period is 90 days. Note: If the specified Logstore already exists, this variable does not take effect. | `- name: aliyun_logs_catalina_ttl  value: '3650'` | N/A
aliyun_logs_{key}_machinegroup | Optional. Specifies the node group in which the application is deployed. If this environment variable is not configured, the default is the node group in which Logtail is deployed. For more information, see Customization Requirement 2: Collect Data From Different Applications Into Different Projects. | `- name: aliyun_logs_catalina_machinegroup  value: my-machine-group` | N/A
aliyun_logs_{key}_logstoremode | Optional. Specifies the type of the Logstore. If this parameter is not specified, the default value is standard. Valid values: standard (supports the one-stop data analysis feature of Simple Log Service; suitable for real-time monitoring, interactive analysis, and building a complete observability system) and query (supports high-performance queries; the index traffic cost is about half of that of standard, but SQL analysis is not supported; suitable for large data volumes, long storage periods of weeks or months, and scenarios without log analysis). Note: If the specified Logstore already exists, this variable does not take effect. | `- name: aliyun_logs_catalina_logstoremode  value: standard` or `- name: aliyun_logs_catalina_logstoremode  value: query` | Requires logtail-ds image version 1.3.1 or later.
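The following snippet is a hypothetical sketch, based on the parameters in the table above, of how several of these variables can be combined for a single {key} (here app-log). The project, Logstore, path, and other values are placeholders. Because the collection path is not stdout, the corresponding volumeMounts and volumes shown in the earlier pod example are still required:
```yaml
# Environment variables for one collection configuration with the key "app-log" (hypothetical values)
env:
  - name: aliyun_logs_app-log                # collect this file path inside the container
    value: /var/log/app/*.log
  - name: aliyun_logs_app-log_project        # send the logs to this project (same region as Logtail)
    value: my-k8s-project
  - name: aliyun_logs_app-log_logstore       # store the logs in this Logstore instead of one named "app-log"
    value: app-logstore
  - name: aliyun_logs_app-log_ttl            # retain the logs for 30 days
    value: '30'
  - name: aliyun_logs_app-log_shard          # create the Logstore with 4 shards
    value: '4'
  - name: aliyun_logs_app-log_logstoremode   # requires logtail-ds image version 1.3.1 or later
    value: query
  - name: aliyun_logs_app-log_tags           # append a custom tag to each collected log entry
    value: app=my-app
```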
- Customization Requirement 1: Collect Data From Multiple Applications Into The Same Logstore
  If you need to collect data from multiple applications into the same Logstore, set the aliyun_logs_{key}_logstore parameter. For example, the following configuration collects the stdout of two applications into stdout-logstore.
  In the example, the `{key}` of Application 1 is `app1-stdout`, and the `{key}` of Application 2 is `app2-stdout`.
  The environment variables configured for Application 1 are as follows:
```yaml
# Configure environment variables
- name: aliyun_logs_app1-stdout
  value: stdout
- name: aliyun_logs_app1-stdout_logstore
  value: stdout-logstore
```
  The environment variables configured for Application 2 are as follows:
```yaml
# Configure environment variables
- name: aliyun_logs_app2-stdout
  value: stdout
- name: aliyun_logs_app2-stdout_logstore
  value: stdout-logstore
```
- Customization Requirement 2: Collect Data From Different Applications Into Different Projects
  If you need to collect data from different applications into multiple projects, perform the following steps:
  - In each project, create a machine group with a custom identifier and name it `k8s-group-{cluster-id}`, where `{cluster-id}` is your cluster ID. The machine group name can be customized.
  - Configure the project, logstore, and machinegroup information in the environment variables of each application. The machine group name is the name of the machine group that you created in the previous step.
    In the following example, the `{key}` of Application 1 is `app1-stdout`, and the `{key}` of Application 2 is `app2-stdout`. If the two applications are deployed in the same ACK cluster, you can use the same machine group for both applications.
    The environment variables configured for Application 1 are as follows:
```yaml
# Configure environment variables
- name: aliyun_logs_app1-stdout
  value: stdout
- name: aliyun_logs_app1-stdout_project
  value: app1-project
- name: aliyun_logs_app1-stdout_logstore
  value: app1-logstore
- name: aliyun_logs_app1-stdout_machinegroup
  value: app1-machine-group
```
    The environment variables configured for Application 2 are as follows:
```yaml
# Configure environment variables for Application 2
- name: aliyun_logs_app2-stdout
  value: stdout
- name: aliyun_logs_app2-stdout_project
  value: app2-project
- name: aliyun_logs_app2-stdout_logstore
  value: app2-logstore
- name: aliyun_logs_app2-stdout_machinegroup
  value: app1-machine-group
```
Step 3: Query and analyze logs
- Log on to the Simple Log Service console.
- In the Project List, click the target project to go to the project details page.
- On the right side of the target Logstore, click the icon, select Query Analysis, and then view the logs output by the Kubernetes cluster.
Default fields in container text logs
The table below describes the default fields included in each container text log.
Field name | Description
--- | ---
__tag__:__hostname__ | The name of the container host.
__tag__:__path__ | The log file path in the container.
__tag__:_container_ip_ | The IP address of the container.
__tag__:_image_name_ | The name of the image that is used by the container.
__tag__:_pod_name_ | The name of the pod.
__tag__:_namespace_ | The namespace to which the pod belongs.
__tag__:_pod_uid_ | The unique identifier (UID) of the pod.
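Once field indexes have been created for these tag fields (see Create Index earlier in this topic), they can be used in query statements to narrow results to a specific workload. The following is an illustrative sketch in the Simple Log Service search syntax; the namespace and pod name are placeholders:
```
# Return only logs from pods whose names start with nginx-demo in the default namespace
__tag__:_namespace_: default and __tag__:_pod_name_: nginx-demo*
```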
References
- After you collect log data, you can use the query and analysis feature of Simple Log Service to gain insights into your logs. For more information, see Quick guide to query and analysis.
- With the visualization feature of Simple Log Service, you can intuitively understand and analyze your logs. For more information, see Quickly create a dashboard.
- Use the alerting feature of Simple Log Service to automatically receive notifications about abnormalities in your logs. For more information, see Quickly set up log-based alerts.
- Simple Log Service collects only new incremental logs. To collect historical log files, see Import historical log files.
- Troubleshooting container log collection:
  - Check whether there are error messages in the console. For more information, see How to view Logtail collection error messages.
  - If the console shows no error messages, check the machine group heartbeat, the Logtail configuration, and related issues. For more information, see How to troubleshoot container log collection exceptions.