Simple Log Service:Install Logtail components in an ACK cluster

Last Updated: Sep 03, 2024

This topic describes how to install and upgrade Logtail components in a Container Service for Kubernetes (ACK) cluster.

Background information

To collect container logs from a Kubernetes cluster, you must install Logtail components.

When you install Logtail components, Simple Log Service automatically completes the following operations:

  1. Create a ConfigMap named alibaba-log-configuration. The ConfigMap contains the configuration information of Simple Log Service, such as projects.

  2. Optional. Create a custom resource definition (CRD) named AliyunLogConfig.

  3. Optional. Create a Deployment named alibaba-log-controller. The Deployment is used to monitor the changes in the AliyunLogConfig CRD and the creation of Logtail configurations.

  4. Create a DaemonSet named logtail-ds to collect logs from nodes.
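
After the Logtail components are installed, you can verify that these resources exist. The following commands are a minimal check based on the resource names listed above; the AliyunLogConfig CRD and the alibaba-log-controller Deployment are present only if the optional components are installed:

kubectl get configmap alibaba-log-configuration -n kube-system
kubectl get crd aliyunlogconfigs.log.alibabacloud.com
kubectl get deployment alibaba-log-controller -n kube-system
kubectl get daemonset logtail-ds -n kube-system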

Install Logtail

You can install Logtail components in an existing ACK cluster. You can also install Logtail components when you create an ACK cluster. To install Logtail components when you create an ACK cluster, you must select Enable Log Service.

Install Logtail components in an existing ACK cluster

Important
  • If you are using an ACK dedicated cluster or an ACK managed cluster, you can follow the instructions in this section to install Logtail components in your ACK cluster.

    For more information about how to collect text logs, standard output (stdout), and standard error (stderr) from containers in a serverless Kubernetes (ASK) cluster, see Use pod environment variables to collect application logs.

  • If your ACK cluster and Simple Log Service resources belong to different Alibaba Cloud accounts, you must configure the ID of the Alibaba Cloud account for which Simple Log Service is activated as a user identifier for your cluster. For more information, see Configure the ID of an Alibaba Cloud account as a user identifier.

  1. Log on to the ACK console.

  2. In the left-side navigation pane, click Clusters.

  3. On the Clusters page, find the cluster in which you want to install Logtail components and choose More > Operations > Manage Components in the Actions column.

  4. On the Logs and Monitoring tab, find logtail-ds and click Install.

    After logtail-ds is installed, Simple Log Service automatically creates a project named k8s-log-${your_k8s_cluster_id}, a machine group named k8s-group-${your_k8s_cluster_id}, and a Logstore named config-operation-log in the project.

    Important

    Do not delete the config-operation-log Logstore.
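
To confirm which project and Logstores your cluster reports to, you can inspect the alibaba-log-configuration ConfigMap that is described in the Background information section. The following command is a minimal check; the exact keys in the ConfigMap may vary with the component version:

kubectl get configmap alibaba-log-configuration -n kube-system -o yaml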

Install Logtail components when you create an ACK cluster

  1. Log on to the ACK console.

  2. In the left-side navigation pane, click Clusters.

  3. On the Clusters page, click Create Kubernetes Cluster.

  4. In the Component Configurations step, select Enable Log Service.

    Note

    In this example, only the steps that are required to enable Simple Log Service are provided. For more information about how to create an ACK cluster, see Create an ACK managed cluster.

    If you select Enable Log Service, the system prompts you to create a Simple Log Service project. For more information about how logs are managed in Simple Log Service, see Project. You can use one of the following methods to create a project:

    • Select Project

      You can select an existing project to manage the container logs that are collected.


    • Create Project

      Simple Log Service automatically creates a project named k8s-log-{ClusterID} to manage the container logs that are collected. ClusterID specifies the unique ID of the ACK cluster that you create.


After the Logtail components are installed, a machine group named k8s-group-${your_k8s_cluster_id} and a Logstore named config-operation-log are automatically created in your project.

Important

Do not delete the config-operation-log Logstore.

View the status, version number, and IP address of Logtail

View the status of Logtail

Run the following command to view the status of Logtail:

kubectl get po -n kube-system | grep logtail

The following output is returned:

NAME            READY     STATUS    RESTARTS   AGE
logtail-ds-gb92k   1/1       Running   0          2h
logtail-ds-wm7lw   1/1       Running   0          4d
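
The number of logtail-ds pods is expected to match the number of nodes on which the DaemonSet is scheduled. The following commands are a quick cross-check; the exact output columns depend on your kubectl version:

kubectl get nodes --no-headers | wc -l
kubectl get ds logtail-ds -n kube-system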

View the version number and IP address of Logtail

Run the following command to view the version number and IP address of Logtail:

kubectl exec logtail-ds-gb92k -n kube-system -- cat /usr/local/ilogtail/app_info.json

The following output is returned:

{
   "UUID" : "",
   "hostname" : "logtail-ds-gb92k",
   "instance_id" : "0EBB2B0E-0A3B-11E8-B0CE-0A58AC140402_172.20.4.2_1517810940",
   "ip" : "192.0.2.0",
   "logtail_version" : "0.16.2",
   "os" : "Linux; 3.10.0-693.2.2.el7.x86_64; #1 SMP Tue Sep 12 22:26:13 UTC 2017; x86_64",
   "update_time" : "2021-02-05 06:09:01"
}
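
If you want to check the Logtail version on all nodes at a time, you can loop over the logtail-ds pods. The following commands are a sketch and assume that the grep utility is available in the Logtail image:

# Print the logtail_version field from app_info.json in each logtail-ds pod
for pod in $(kubectl get po -n kube-system -o name | grep logtail-ds); do
  kubectl exec -n kube-system "$pod" -- grep logtail_version /usr/local/ilogtail/app_info.json
done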

Upgrade Logtail

Back up files

Important

An upgrade requires a few seconds to complete. During an upgrade, the Logtail container is restarted, which may cause a small amount of data to be duplicated or lost during collection.

Before you upgrade the Logtail components, we recommend that you back up the description files that are related to the Logtail components.

kubectl get ds -n kube-system logtail-ds -o yaml > logtail-ds.yaml
kubectl get deployment -n kube-system alibaba-log-controller -o yaml > alibaba-log-controller.yaml
kubectl get crd aliyunlogconfigs.log.alibabacloud.com -o yaml > aliyunlogconfigs-crd.yaml
kubectl get cm -n kube-system alibaba-log-configuration -o yaml > alibaba-log-configuration.yaml
kubectl get aliyunlogconfigs --all-namespaces -o yaml > aliyunlogconfigs-cr.yaml
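
Before you continue, you can confirm that each backup file was written and is not empty. The following command is only a sanity check; adjust the file names if you changed them:

ls -lh logtail-ds.yaml alibaba-log-controller.yaml aliyunlogconfigs-crd.yaml alibaba-log-configuration.yaml aliyunlogconfigs-cr.yaml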

Upgrade components

We recommend that you use the automatic upgrade method in common scenarios. If you have modified parameters such as environment variables in the logtail-ds DaemonSet or alibaba-log-controller Deployment, we recommend that you use the manual upgrade method to retain your modifications.

Automatic upgrade

Important

If you use the automatic upgrade method, your modifications to the parameters in the logtail-ds DaemonSet and alibaba-log-controller Deployment are not retained.

  1. Log on to the ACK console.

  2. In the left-side navigation pane, click Clusters.

  3. On the Clusters page, find the cluster in which you want to upgrade Logtail components and choose More > Operations > Manage Components in the Actions column.

  4. On the Logs and Monitoring tab, find logtail-ds and click Upgrade.

  5. In the Update dialog box, click OK.

    Important

    If the component cannot be upgraded to the most up-to-date Logtail version, the Kubernetes version of your cluster may be outdated. In this case, upgrade the Kubernetes version of your cluster first or use the manual upgrade method.

    After the upgrade is performed, you can view the status of each logtail-ds pod in the ACK console. If each logtail-ds pod is in the running state, the upgrade is successful.

Manual upgrade

Important

If you use the manual upgrade method, your existing configurations are not updated, and some features may not be supported.

A manual upgrade covers both logtail-ds and alibaba-log-controller. In most cases, you only need to upgrade logtail-ds to obtain the collection capabilities provided in the most up-to-date version of Logtail. If you want to obtain the CRD-based collection capabilities provided in the most up-to-date version of Logtail, you must also upgrade alibaba-log-controller. The following procedure shows how to upgrade logtail-ds.

  1. Log on to the ACK console.

  2. In the left-side navigation pane, click Clusters.

  3. On the Clusters page, find the cluster in which you want to upgrade Logtail components and click its name.

  4. In the left-side navigation pane of the cluster details page, choose Workloads > DaemonSets.

    Note

    If you want to upgrade alibaba-log-controller, choose Workloads > Deployments. Then, set Namespace to kube-system and find alibaba-log-controller.

  5. Set Namespace to kube-system. Then, find logtail-ds and click Edit in the Actions column.

  6. Check whether the required environment variables exist.

    If the ALIYUN_LOGTAIL_CONFIG, ALIYUN_LOGTAIL_USER_ID, or ALIYUN_LOGTAIL_USER_DEFINED_ID environment variable does not exist, your Logtail version is outdated. You can submit a ticket for technical support.

  7. Click Select Image Version to the right of Image Version.

  8. In the Image Version dialog box, click the most up-to-date version and click OK.

  9. In the right-side pane of the page that appears, click Update.

    After the upgrade is performed, you can view the status of each logtail-ds pod in the ACK console. If each logtail-ds pod is in the running state, the upgrade is successful.
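
You can also verify the upgrade from the command line. The following commands are a sketch and assume that Logtail runs as the first container in the logtail-ds DaemonSet:

kubectl rollout status daemonset/logtail-ds -n kube-system
kubectl get ds logtail-ds -n kube-system -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'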

Upgrade Logtail from the version named latest

The YAML file used in the Logtail version named latest is outdated. If you use this version, issues may occur during an upgrade or when new features are used. We recommend that you upgrade from the latest version to the most up-to-date version. Perform the following steps:

  1. Store the existing AliyunLogConfig CRD.

    Replace log-crds.yaml based on your business scenario.

    kubectl get AliyunLogConfig -A -o yaml > log-crds.yaml
  2. Uninstall logtail-ds.

    On the Logs and Monitoring tab in the ACK console, find logtail-ds and click Uninstall. For more information, see Uninstall Logtail.

  3. Install logtail-ds.

    On the Logs and Monitoring tab in the ACK console, find logtail-ds and click Install. For more information, see Install Logtail.

  4. Deploy the stored AliyunLogConfig CRD.

    Replace log-crds.yaml based on your business scenario.

    kubectl apply -f log-crds.yaml
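
After the components are reinstalled and the stored resources are reapplied, you can check that the Logtail configurations and pods are available again. The following commands are a minimal check:

kubectl get aliyunlogconfigs --all-namespaces
kubectl get po -n kube-system | grep logtail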

Roll back an upgrade

The following procedure shows how to roll back to a specific version.

Note

The YAML files that are backed up before an upgrade contain redundant information. Before you can restore the configurations of Logtail, you must manually delete the redundant information. You can use the kubectl-neat tool to delete the redundant information. You must delete the following fields: metadata.creationTimestamp, metadata.generation, metadata.resourceVersion, metadata.uid, and status.

  1. Determine whether you want to retain the Logtail configurations that were created after the upgrade.

    If you do not want to retain them, delete these configurations.

  2. Delete redundant information from the backup files.

    cat logtail-ds.yaml | kubectl-neat > neat-logtail-ds.yaml
    cat alibaba-log-controller.yaml | kubectl-neat > neat-alibaba-log-controller.yaml
    cat aliyunlogconfigs-crd.yaml | kubectl-neat > neat-aliyunlogconfigs-crd.yaml
    cat alibaba-log-configuration.yaml | kubectl-neat > neat-alibaba-log-configuration.yaml
    cat aliyunlogconfigs-cr.yaml | kubectl-neat > neat-aliyunlogconfigs-cr.yaml
  3. Use the backup files after you delete redundant information to restore the configurations of Logtail.

    kubectl apply -f neat-logtail-ds.yaml
    kubectl apply -f neat-alibaba-log-controller.yaml
    kubectl apply -f neat-aliyunlogconfigs-crd.yaml
    kubectl apply -f neat-alibaba-log-configuration.yaml
    kubectl apply -f neat-aliyunlogconfigs-cr.yaml
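
After the backup files are applied, you can wait for the workloads to become ready again. The following commands are a minimal check:

kubectl rollout status daemonset/logtail-ds -n kube-system
kubectl rollout status deployment/alibaba-log-controller -n kube-system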

Uninstall Logtail

  1. Log on to the ACK console.

  2. In the left-side navigation pane, click Clusters.

  3. On the Clusters page, find the cluster from which you want to uninstall Logtail components and choose More > Operations > Manage Components in the Actions column.

  4. On the Logs and Monitoring tab, find logtail-ds and click Uninstall.

  5. Click OK as prompted.

What to do next

Create Logtail configurations to collect container logs.

FAQ

How do I view the version of a container image?

You can visit one of the following image repositories to view the version of a container image:

How do I collect container logs from multiple Kubernetes clusters to the same Simple Log Service project?

Note

You can collect container logs from multiple Kubernetes clusters to the same Simple Log Service project only if the Kubernetes clusters reside in the same region.

If you want to collect container logs from multiple ACK clusters to the same Simple Log Service project, you must select the same project when you create the ACK clusters.

How do I view the logs of Logtail?

The logs of Logtail are stored in the ilogtail.LOG and logtail_plugin.LOG files in the /usr/local/ilogtail/ directory of a Logtail container.
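
To read these files without opening a shell in the container, you can use kubectl exec. The following command is an example for the plugin log; replace logtail-ds-gb92k with the name of one of your logtail-ds pods:

kubectl exec logtail-ds-gb92k -n kube-system -- tail /usr/local/ilogtail/logtail_plugin.LOG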

The stdout and stderr of the Logtail container are not used for troubleshooting in this scenario. You can ignore output similar to the following:

start umount useless mount points, /shm$|/merged$|/mqueue$
umount: /logtail_host/var/lib/docker/overlay2/3fd0043af174cb0273c3c7869500fbe2bdb95d13b1e110172ef57fe840c82155/merged: must be superuser to unmount
umount: /logtail_host/var/lib/docker/overlay2/d5b10aa19399992755de1f85d25009528daa749c1bf8c16edff44beab6e69718/merged: must be superuser to unmount
umount: /logtail_host/var/lib/docker/overlay2/5c3125daddacedec29df72ad0c52fac800cd56c6e880dc4e8a640b1e16c22dbe/merged: must be superuser to unmount
......
xargs: umount: exited with status 255; aborting
umount done
start logtail
ilogtail is running
logtail status:
ilogtail is running

How do I view the status of Simple Log Service components in Kubernetes clusters?

Run the following commands:

kubectl get deploy alibaba-log-controller -n kube-system
kubectl get ds logtail-ds -n kube-system

What do I do if alibaba-log-controller fails to start?

Check whether the following requirements were met when you installed alibaba-log-controller:

  • Run the installation command on the master node of your Kubernetes cluster.

  • Specify the ID of your Kubernetes cluster in the installation command.

If alibaba-log-controller still fails to start after you confirm the preceding requirements, run the kubectl delete -f deploy command to delete the installation template that is generated. Then, rerun the installation command.

How do I view the status of the logtail-ds DaemonSet in a Kubernetes cluster?

Run the kubectl get ds -n kube-system command to view the status of the logtail-ds DaemonSet.

Note

The default namespace to which a Logtail container belongs is kube-system.

How do I view the operational logs of Logtail?

The operational logs of Logtail are stored in the ilogtail.LOG file in the /usr/local/ilogtail/ directory. If the log file is rotated, the generated files are compressed and stored as ilogtail.LOG.x.gz. Run the following command to view the logs:

kubectl exec logtail-ds-gb92k -n kube-system -- tail /usr/local/ilogtail/ilogtail.LOG

The following output is returned:

[2018-02-05 06:09:02.168693] [INFO] [9] [build/release64/sls/ilogtail/LogtailPlugin.cpp:104] logtail plugin Resume:start
[2018-02-05 06:09:02.168807] [INFO] [9] [build/release64/sls/ilogtail/LogtailPlugin.cpp:106] logtail plugin Resume:success
[2018-02-05 06:09:02.168822] [INFO] [9] [build/release64/sls/ilogtail/EventDispatcher.cpp:369] start add existed check point events, size:0
[2018-02-05 06:09:02.168827] [INFO] [9] [build/release64/sls/ilogtail/EventDispatcher.cpp:511] add existed check point events, size:0 cache size:0 event size:0 success count:0
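
If the current ilogtail.LOG file does not cover the time range that you want to check, you can look at the rotated ilogtail.LOG.x.gz files that are described above. The following commands are a sketch; the ilogtail.LOG.1.gz file name is only an example that exists after rotation, and the zcat utility is assumed to be available in the Logtail image:

kubectl exec logtail-ds-gb92k -n kube-system -- ls /usr/local/ilogtail/
kubectl exec logtail-ds-gb92k -n kube-system -- sh -c 'zcat /usr/local/ilogtail/ilogtail.LOG.1.gz | tail -n 20'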

How do I restart Logtail for a pod?

  1. Stop Logtail.

    In the following command, logtail-ds-gb92k specifies the pod, and kube-system specifies the namespace. Configure the parameters based on your business scenario.

    kubectl exec logtail-ds-gb92k -n kube-system -- /etc/init.d/ilogtaild stop

    If the following output is returned, Logtail is stopped:

    kill process Name: ilogtail pid: 7
    kill process Name: ilogtail pid: 9
    stop success
  2. Start Logtail.

    In the following command, logtail-ds-gb92k specifies the pod, and kube-system specifies the namespace. Configure the parameters based on your business scenario.

    kubectl exec logtail-ds-gb92k -n kube-system -- /etc/init.d/ilogtaild start

    If the following output is returned, Logtail is started:

    ilogtail is running
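
To confirm that Logtail is running after the restart, you can query its status. The following command is a sketch; it assumes that the ilogtaild script supports a status subcommand, which the "logtail status:" lines in the startup output earlier in this topic suggest. If it does not, you can fall back to the app_info.json check described earlier in this topic:

kubectl exec logtail-ds-gb92k -n kube-system -- /etc/init.d/ilogtaild status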

References