
Function Compute:Use Loggie in custom runtimes

Last Updated:Mar 28, 2024

Loggie is a lightweight, high-performance, cloud-native log collection agent written in Go. You can use Loggie in functions that run in custom runtimes to collect logs from files and upload the logs to Simple Log Service for storage and analysis.

Prerequisites

A project and a Logstore are created in Simple Log Service. For more information, see Create a project and Create a Logstore.

Important

The log project that you created must be in the same region as the function to be created.

Procedure

Step 1: Create a function

  1. Log on to the Function Compute console. In the left-side navigation pane, click Functions.

  2. In the top navigation bar, select a region. On the Functions page, click Create Function.

  3. On the Create Function page that appears, configure the following parameters, retain the default values for the other parameters, and then click Create. For more information, see Create a function.

    • Method to create the function: Web Function

    • Basic Settings: Configure Function Name.

    • Code: Configure the runtime and code-related information of the function.

      • Sample Code: Python 3.9

      • Code Upload Method: Select Upload Folder. The name of the folder to upload is code, and the app.py file is stored in the code directory. The following code snippet provides sample code of app.py:

      from flask import Flask
      from flask import request
      import logging
      import os
      
      REQUEST_ID_HEADER = 'x-fc-request-id'
      
      app = Flask(__name__)
      
      format_str = '[%(asctime)s] %(levelname)s in %(module)s: %(message)s'
      logging.basicConfig(filename='/tmp/log/fc-flask.log', filemode='w', 
          format=format_str, encoding='utf-8', level=logging.DEBUG)
      @app.route("/invoke", methods = ["POST"])
      def hello_world():
          rid = request.headers.get(REQUEST_ID_HEADER)
          logger = logging.getLogger()
      
          print("FC Invoke Start RequestId: " + rid)
          logger.info("FC Invoke Start RequestId: " + rid)
      
          data = request.stream.read()
          print(str(data))
          logger.info("receive event: {}".format(str(data)))
          
          print("FC Invoke End RequestId: " + rid)
          logger.info("FC Invoke End RequestId: " + rid)
          return "Hello, World!"
      
      if __name__ == '__main__':
          app.run(host='0.0.0.0',port=9000)
      Note

      You can change filename='/tmp/log/fc-flask.log' in the code to the actual path of your log file. Make sure that the path is consistent with the value of sources.paths configured in Step 2.

      • Startup Command: /code/bootstrap

        Note

        The bootstrap file is created in Step 2.

      • Listening Port: 9000
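In the sample app.py above, filemode='w' truncates the log file at each cold start. If you prefer size-capped logs under the limited disk space of /tmp, a rotating handler is a possible alternative. The following sketch is not part of the official sample; it assumes the same /tmp/log/fc-flask.log path and log format:

```python
import logging
import os
from logging.handlers import RotatingFileHandler

# Assumption: /tmp/log is the directory watched by Loggie (sources.paths).
os.makedirs("/tmp/log", exist_ok=True)

handler = RotatingFileHandler(
    "/tmp/log/fc-flask.log",   # must stay consistent with sources.paths
    maxBytes=5 * 1024 * 1024,  # rotate after about 5 MB
    backupCount=2,             # keep at most two rotated backups
)
handler.setFormatter(logging.Formatter(
    "[%(asctime)s] %(levelname)s in %(module)s: %(message)s"))

logger = logging.getLogger("fc-flask")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.info("rotating file logger initialized")
```

Note that rotated backups are named fc-flask.log.1 and so on, which do not match the /tmp/log/*.log pattern, so Loggie would collect only the live file.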

Step 2: Create a bootstrap file to serve as the startup command

  1. After the function is created, use WebIDE on the Code tab to create a bootstrap file in the code directory.

    The following sample code shows an example bootstrap file.

    #!/bin/bash
    
    #1. Create the pipelines.yml file.
    mkdir -p /tmp/log /code/etc
    cat << EOF > /code/etc/pipelines.yml
    pipelines:
      - name: demo
        sources:
          - type: file
            name: fc-demo
            addonMeta: true
            fields:
              topic: "loggie"
            fieldsUnderRoot: true
            paths:
              - "/tmp/log/*.log"
        sink:
          type: sls
          endpoint: ${LOGGIE_SINK_SLS_ENDPOINT}
          accessKeyId: ${LOGGIE_SINK_SLS_ACCESS_ID}
          accessKeySecret: ${LOGGIE_SINK_SLS_ACCESS_SECRET}
          project: ${LOGGIE_SINK_SLS_PROJECT}
          logstore: ${LOGGIE_SINK_SLS_LOGSTORE}
          topic: ${LOGGIE_SINK_SLS_TOPIC}
    EOF
    
    #2. Create the loggie.yml file.
    cat << EOF > /code/etc/loggie.yml
    EOF
    
    #3. Start Loggie and run it as a background process.
    /opt/bin/loggie -config.system=/code/etc/loggie.yml -config.pipeline=/code/etc/pipelines.yml > /tmp/loggie.log 2>&1 &
    
    #4. Start the application.
    exec python app.py

    The script performs the following operations:

    1. Create the pipelines.yml file, which is the pipeline configuration file.

      • sources

        Specifies the type and paths of the logs to collect. This example collects logs from all files whose names end with .log in the /tmp/log directory.

        The addonMeta field in sources specifies whether to add the default state metadata of log collection. For more information about sources, see Overview.

      • sink

        Specifies information about Simple Log Service. The variables in the script are set in Step 4.

    2. Create the loggie.yml file, which is the Loggie configuration file.

      If this file is empty, the default configurations are used, as in this example. However, the loggie.yml file must exist. If the file is not empty, see Overview for the detailed configurations.

    3. Start Loggie and run it as a background process. Running logs of Loggie are printed to the /tmp/loggie.log file.

    4. Start the application. In this example, a Python runtime is used. Replace this command with the startup command of your actual runtime.

  2. Configure the executable permissions on the bootstrap file.

    In WebIDE, choose Terminal > New Terminal and run the chmod 777 bootstrap command to configure the permissions.

  3. Click Deploy to deploy the code.
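A note on the heredoc in the bootstrap script: because the EOF delimiter is unquoted, the shell expands the ${LOGGIE_SINK_SLS_*} placeholders from the environment when bootstrap runs, so pipelines.yml is written with the resolved values. The following Python sketch mimics that substitution step for local inspection; the endpoint and project values are placeholders for illustration, not real resources:

```python
import os
from string import Template

# Example values only -- in Function Compute these come from the
# environment variables that you configure in Step 4.
os.environ["LOGGIE_SINK_SLS_ENDPOINT"] = "cn-hangzhou.log.aliyuncs.com"
os.environ["LOGGIE_SINK_SLS_PROJECT"] = "demo-project"

# A fragment of the sink section from pipelines.yml.
sink_template = Template(
    "sink:\n"
    "  type: sls\n"
    "  endpoint: ${LOGGIE_SINK_SLS_ENDPOINT}\n"
    "  project: ${LOGGIE_SINK_SLS_PROJECT}\n"
)

# safe_substitute leaves unknown placeholders intact instead of raising.
rendered = sink_template.safe_substitute(os.environ)
print(rendered)
```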

Step 3: Add the Loggie Agent common layer

  1. Click the Configuration tab. On the Layers tab, click Modify in the Layers section.

  2. In the Layers panel, choose Add Layer > Add Official Common Layer and configure the Loggie Agent layer.

    The following information describes the Loggie Agent common layer:

    • Layer name: Loggie Agent

    • Compatible runtime: Custom runtimes

    • Layer version: Version 1 is used in this example.

    • ARN: acs:fc:{region}:official:layers/Loggie13x/versions/1

  3. Click Deploy to add the Loggie Agent layer.

Step 4: Configure environment variables

  1. On the Configuration tab, click the Environment Variables tab and click Modify.

  2. In the Environment Variables panel, add the following environment variables. For more information about how to configure environment variables, see Environment variables.

    • Set FC_EXTENSION_SLS_LOGGIE to true.

      After you add this environment variable, an instance is frozen 10 seconds after an invocation. This ensures that Loggie can report logs as expected.

      Important

      This method generates fees, in the same way as Prefreeze hooks. For more information, see Billing rules.

    • Set the environment variables that are referenced in the pipelines.yml file, including LOGGIE_SINK_SLS_ENDPOINT, LOGGIE_SINK_SLS_ACCESS_ID, LOGGIE_SINK_SLS_ACCESS_SECRET, LOGGIE_SINK_SLS_PROJECT, LOGGIE_SINK_SLS_LOGSTORE, and LOGGIE_SINK_SLS_TOPIC.

      • LOGGIE_SINK_SLS_ENDPOINT: the endpoint of Simple Log Service. For more information, see Endpoints.

      • LOGGIE_SINK_SLS_ACCESS_ID: the AccessKey ID. For more information about how to obtain an AccessKey ID, see AccessKey pair.

      • LOGGIE_SINK_SLS_ACCESS_SECRET: the AccessKey secret. For more information about how to obtain an AccessKey secret, see AccessKey pair.

      • LOGGIE_SINK_SLS_PROJECT: the project to which the Logstore belongs.

      • LOGGIE_SINK_SLS_LOGSTORE: the Logstore that is used to store the logs.

      • LOGGIE_SINK_SLS_TOPIC: the topic of the logs. You can specify a custom value.

  3. Click Deploy. After the function configurations are updated, execution logs of the function can be uploaded to Simple Log Service by using Loggie.
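Before you deploy, it can help to confirm that all six variables are present: an unset variable expands to an empty string in the bootstrap heredoc, and the SLS sink cannot authenticate with empty values. This is a small stdlib sanity check, using the variable names listed above:

```python
import os

# The six variables referenced by the sink section of pipelines.yml.
REQUIRED = [
    "LOGGIE_SINK_SLS_ENDPOINT",
    "LOGGIE_SINK_SLS_ACCESS_ID",
    "LOGGIE_SINK_SLS_ACCESS_SECRET",
    "LOGGIE_SINK_SLS_PROJECT",
    "LOGGIE_SINK_SLS_LOGSTORE",
    "LOGGIE_SINK_SLS_TOPIC",
]

# Collect any variables that are unset or empty.
missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    print("missing environment variables: " + ", ".join(missing))
else:
    print("all Loggie sink variables are set")
```

You could run this check at the top of the bootstrap script, or locally before deployment.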

Step 5: Verify the results

  1. On the Code tab, click Test Function to debug the function in the Function Compute console.

    You may experience some delay in the first test. We recommend that you invoke the function several more times.

  2. Log on to the Simple Log Service console. Query logs based on the region, project, and Logstore information in the pipelines.yml file. The returned logs contain the following fields:

    • body: the log body.

    • state.*: the state metadata of log collection. For example, state.hostname is the ID of the instance in which the function runs.

Troubleshooting

Loggie runs independently in a function instance. Function Compute cannot detect whether Loggie is running normally, and function execution is not affected even if Loggie fails.

When you query Loggie-related logs in Simple Log Service, a latency of several seconds may occur. If you cannot find logs that are related to Loggie in Simple Log Service, perform the following operations to troubleshoot the issue:

Function runs as expected

In this case, the instance is alive for several minutes after the function is invoked. You can log on to the instance to view the running status and logs of Loggie. For more information about how to log on to an instance, see CLI-based instance management.

  • If no Loggie logs exist, start Loggie from the command line and check the output for errors.

  • If logs exist, troubleshoot the issue based on the logs.

    • Check whether the pipelines.yml file is correctly configured.

    • Check whether the Simple Log Service sink is successfully started. Logs are similar to pipeline sink(sink/sls)-0 invoke loop start.

    • Check whether the log file is detected. Logs are similar to start collect file: /tmp/log/fc-flask.log. If no similar log is available, check whether a log file is generated at the location specified by the paths field in the pipelines.yml file.

Note

The first time you connect to a Simple Log Service Logstore, latency may occur. If no anomaly is found in the logs, you can invoke the function multiple times and wait a few minutes before you query the logs.
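For the path check described above, you can reproduce Loggie's file matching with a glob lookup inside the instance. This sketch assumes the default /tmp/log/*.log pattern from the pipelines.yml in Step 2:

```python
import glob
import os

# The pattern configured under sources.paths in pipelines.yml (Step 2).
pattern = "/tmp/log/*.log"

matches = glob.glob(pattern)
if matches:
    for path in matches:
        # Report the size of each matched log file.
        print(path, os.path.getsize(path))
else:
    print("no files match " + pattern +
          " -- check the logging path in your application")
```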

Function fails to run

If a function fails to run, remove the Loggie startup logic from the bootstrap file and check whether the function runs as expected. In most cases, Loggie does not affect function execution because it runs as an independent background process. If unexpected process exits or execution timeouts occur, increase the memory or CPU specifications of the function.

References

  • For more information about Loggie, see Loggie.

  • In this example, Loggie collects logs and uploads them as is, without any processing. If you want to process logs before you upload them, such as parsing JSON format logs or removing DEBUG logs, you can add Interceptor configurations in the pipelines.yml file. For more information, see Overview.