By setting up an SLS trigger, you can seamlessly integrate Simple Log Service (SLS) with Function Compute. An SLS trigger automatically initiates function execution upon the arrival of new logs, enabling the incremental consumption and custom processing of data from a Logstore.
Scenarios
- Data cleansing and processing: Use Simple Log Service to quickly collect, process, query, and analyze logs.
- Data shipping: Ship data to a destination and build data pipelines between big data services on the cloud.
Data processing functions
Function types
- Template functions: For more information, see aliyun-log-fc-functions.
- Custom functions: The format of the function configuration depends on the implementation of the function. For more information, see the ETL function development guide.
Function Compute trigger mechanism
A Simple Log Service ETL task corresponds to a trigger in Function Compute. When you create an ETL task in Simple Log Service, the service starts a timer based on the task configuration. The timer periodically checks the shards of the Logstore for new data. If new data is found, a `<shard_id, begin_cursor, end_cursor>` triple is generated as an event, which Simple Log Service (SLS) delivers to Function Compute to invoke the function.
Note that during storage system upgrades, cursor information may change even if no new data is written. As a result, each shard may trigger once without actual data. In the function, you can attempt to retrieve data from the shard by using the cursor. If no data is returned, the trigger is empty and can be ignored. For more information, see the Custom function development guide.
The ETL task for Simple Log Service operates on a time-based trigger mechanism. For instance, setting the ETL job's trigger interval to 60 seconds means that if data is continuously written to Shard0 in the Logstore, the shard will initiate function execution every 60 seconds. However, if the shard receives no new data, it will not trigger the function. The function's input consists of the cursor range from the latest 60-second period. You can use this cursor to read data from Shard0 and process it further within the function.
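The per-shard pull loop that this mechanism implies can be sketched as a small helper. This is only a sketch: the `client` argument is an assumption standing in for any object with a `pull_logs`-style method (such as `LogClient` from the aliyun-log Python SDK), and a return value of 0 identifies the empty triggers described above.

```python
def consume_shard(client, project, logstore, shard_id, begin_cursor, end_cursor):
    """Pull all log groups in [begin_cursor, end_cursor) for one shard.

    Returns the number of log groups read. A return value of 0 indicates
    an empty trigger (for example, one caused by a storage system upgrade)
    and can safely be ignored.
    """
    total = 0
    cursor = begin_cursor
    while True:
        resp = client.pull_logs(project, logstore, shard_id, cursor,
                                count=100, end_cursor=end_cursor)
        n = resp.get_loggroup_count()
        if n == 0:
            break  # reached end_cursor or the trigger was empty
        total += n
        cursor = resp.get_next_cursor()
    return total
```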
Limits
The number of Simple Log Service triggers associated with a single log project must not exceed five times the number of Logstores in the project.
We recommend that you configure no more than five Simple Log Service triggers for each Logstore. Otherwise, data may not be efficiently shipped to Function Compute.
Sample scenarios
You can configure a Simple Log Service trigger to periodically obtain updated data and trigger function execution, which incrementally consumes data from a Logstore. In the function, you can perform custom processing tasks, such as data cleansing and transformation, and deliver the data to third-party services. This example only shows how to obtain and print log data.
The function used for data processing can be a template provided by Simple Log Service or a custom function.
Prerequisites
- Function Compute: Alibaba Cloud's serverless computing service, which offers event-driven execution and automatic scaling.
- Simple Log Service (SLS): An Alibaba Cloud service that offers log data management and analysis.
- Create a log project and two Logstores. One Logstore stores the collected logs (Function Compute triggers based on incremental logs, so you must ensure that logs are continuously collected), and the other Logstore stores the logs generated by the Simple Log Service trigger.
- The log project must be in the same region as the Function Compute service.
Input parameter description
- event

  When the Simple Log Service trigger fires, it sends the event data to the runtime, which converts the data into a JSON object and passes it to the function's `event` parameter. The JSON format is as follows:

  ```json
  {
    "parameter": {},
    "source": {
      "endpoint": "http://cn-hangzhou-intranet.log.aliyuncs.com",
      "projectName": "fc-test-project",
      "logstoreName": "fc-test-logstore",
      "shardId": 0,
      "beginCursor": "MTUyOTQ4MDIwOTY1NTk3ODQ2Mw==",
      "endCursor": "MTUyOTQ4MDIwOTY1NTk3ODQ2NA=="
    },
    "jobName": "1f7043ced683de1a4e3d8d70b5a412843d81****",
    "taskId": "c2691505-38da-4d1b-998a-f1d4bb8c****",
    "cursorTime": 1529486425
  }
  ```
  Below is a description of the parameters:

  - parameter: The value of the invocation parameter that you entered when you configured the trigger.
  - source: The information about the log block that the function reads.
    - endpoint: The Simple Log Service endpoint of the region where the log project resides.
    - projectName: The name of the log project.
    - logstoreName: The name of the Logstore that Function Compute consumes. The trigger periodically subscribes data from this Logstore to the function for custom processing.
    - shardId: The ID of a specific shard in the Logstore.
    - beginCursor: The position at which data consumption starts.
    - endCursor: The position at which data consumption ends.

    Note: When you debug the function, you can call the GetCursor API to obtain beginCursor and endCursor, and construct a test event based on the preceding example.
  - jobName: The name of the Simple Log Service ETL job. The Simple Log Service trigger configured for the function corresponds to an ETL job. This parameter is automatically generated by Function Compute; you do not need to configure it.
  - taskId: For an ETL job, the taskId is a deterministic identifier of a function invocation. This parameter is automatically generated by Function Compute; you do not need to configure it.
  - cursorTime: The Unix timestamp (in seconds) when the last log arrived at Simple Log Service.
- context

  When Function Compute executes your function, it passes a context object to the function's `context` parameter. This object includes details about the invocation, the service, the function, and the execution environment. This topic describes how to obtain key information through `context.credentials`. For more information about other fields, see Context.
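For a quick local illustration of the event format above, the raw event bytes can be unpacked with nothing but the standard library. The payload below is the sample shown earlier; the helper name `parse_sls_event` is made up for illustration.

```python
import json

SAMPLE_EVENT = b'''{
  "parameter": {},
  "source": {
    "endpoint": "http://cn-hangzhou-intranet.log.aliyuncs.com",
    "projectName": "fc-test-project",
    "logstoreName": "fc-test-logstore",
    "shardId": 0,
    "beginCursor": "MTUyOTQ4MDIwOTY1NTk3ODQ2Mw==",
    "endCursor": "MTUyOTQ4MDIwOTY1NTk3ODQ2NA=="
  },
  "jobName": "1f7043ced683de1a4e3d8d70b5a412843d81****",
  "taskId": "c2691505-38da-4d1b-998a-f1d4bb8c****",
  "cursorTime": 1529486425
}'''

def parse_sls_event(event: bytes) -> dict:
    """Decode the raw event bytes and extract the fields needed to pull logs."""
    obj = json.loads(event.decode())
    src = obj["source"]
    return {
        "endpoint": src["endpoint"],
        "project": src["projectName"],
        "logstore": src["logstoreName"],
        "shard_id": src["shardId"],
        "begin_cursor": src["beginCursor"],
        "end_cursor": src["endCursor"],
    }
```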
Step 1: Create a Simple Log Service trigger
1. Log on to the Function Compute console. In the left-side navigation pane, click Functions.
2. In the top navigation bar, select a region. On the Functions page, click the function that you want to manage.
3. On the function details page, click the Configurations tab. In the left-side navigation pane, click Triggers. Then, click Create Trigger.
4. In the Create Trigger panel, enter the required information and click Confirm.
| Configuration item | Operation | Example |
| --- | --- | --- |
| Trigger Type | Select Simple Log Service (SLS). | Simple Log Service (SLS) |
| Name | Enter a custom trigger name. If you do not enter a name, Function Compute automatically generates one. | log_trigger |
| Version or Alias | The default value is LATEST. To create a trigger for another version or alias, first switch to that version or alias in the upper-right corner of the function details page. For more information, see Version management and Alias management. | LATEST |
| Log Project | Select the log project whose data you want to consume. | aliyun-fc-cn-hangzhou-2238f0df-a742-524f-9f90-976ba457**** |
| Logstore | Select the Logstore whose data you want to consume. The trigger periodically subscribes data from this Logstore to the function for custom processing. | function-log |
| Trigger Interval | Enter the interval at which Simple Log Service triggers the function. Valid values: 3 to 600. Unit: seconds. Default value: 60. | 60 |
| Retries | Enter the maximum number of retries allowed for a single trigger. Valid values: 0 to 100. Default value: 3. Note: An invocation is successful if status=200 and the `X-Fc-Error-Type` header is neither `UnhandledInvocationError` nor `HandledInvocationError`; otherwise, the invocation fails and is retried. For more information about `X-Fc-Error-Type`, see Return data. If the function still fails after the specified number of retries is reached, the system continues to retry in exponential backoff mode with increasing intervals. | 3 |
| Trigger Logs | Select the Logstore that stores the logs generated when Simple Log Service invokes the function. | function-log2 |
| Invocation Parameters | Optionally enter custom parameters, which are passed to the function as the `parameter` field of the event. The value must be a JSON string. This field is empty by default. | None |
| Role Name | Select AliyunLogETLRole. Note: The first time you create a trigger of this type, click Authorize Now in the dialog box that appears after you click Confirm. | AliyunLogETLRole |
After the trigger is created, it is displayed on the Triggers tab. To modify or delete a trigger, see Trigger management.
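As a sketch of how a function might consume the Invocation Parameters value: the JSON string configured in the console arrives already parsed as the event's `parameter` field (an empty configuration yields `{}`, as in the sample event). The key name `target_table` below is made up for illustration.

```python
def get_custom_parameter(event_obj: dict, key: str, default=None):
    """Read a custom setting from event['parameter'].

    The JSON string configured in the trigger's Invocation Parameters
    (for example, '{"target_table": "t1"}') is assumed to arrive as a
    parsed JSON object; an empty configuration yields an empty dict.
    """
    return event_obj.get("parameter", {}).get(key, default)
```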
Step 2: Configure permissions
1. On the Function Details page, open the permission settings and click Edit. On the page that appears, set Function Role:
   - You can use the default role AliyunFCServerlessDevsRole, which has read-only permissions on Simple Log Service by default.
   - You can also create a custom RAM role. A custom RAM role must meet the following two requirements:
     - When you create the RAM role, select Alibaba Cloud Service as the trusted entity and Function Compute as the trusted service. For more information, see how to create a RAM role with an Alibaba Cloud service as the trusted entity.
     - Grant the RAM role the permissions on Simple Log Service that the function requires. For more information, see the RAM custom authorization example.
2. After the configuration is complete, click Deploy.
Step 3: Deploy the function and view the printed logs
1. On the function details page, click the Code tab, enter your code in the code editor, and then click Deploy Code.

   This example deploys a Python function that implements the following features:

   - Obtain trigger-related information, such as `endpoint`, `projectName`, `logstoreName`, and `beginCursor`, from `event`.
   - Obtain authorization information, such as `accessKeyId`, `accessKey`, and `securityToken`, from `context`.
   - Initialize the SLS client based on the obtained information.
   - Pull log data from the source Logstore, starting at the specified cursor position.

   Note: The following sample code provides a template for the main processing logic.
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This sample code mainly does the following things:
* Get SLS processing-related information from the event
* Initialize an SLS client
* Pull logs from the source Logstore
"""
import json
import logging

from aliyun.log import LogClient

logger = logging.getLogger()


def handler(event, context):
    # Access keys can be fetched through context.credentials
    print("The content in context entity is: ", context)
    creds = context.credentials
    access_key_id = creds.access_key_id
    access_key_secret = creds.access_key_secret
    security_token = creds.security_token

    # Parse the event into an object
    event_obj = json.loads(event.decode())
    print("The content in event entity is: ", event_obj)

    # Get the log project name, Logstore name, SLS endpoint,
    # begin cursor, end cursor, and shard ID from event.source
    source = event_obj['source']
    log_project = source['projectName']
    log_store = source['logstoreName']
    endpoint = source['endpoint']
    begin_cursor = source['beginCursor']
    end_cursor = source['endCursor']
    shard_id = source['shardId']

    # Initialize the SLS client
    client = LogClient(endpoint=endpoint,
                       accessKeyId=access_key_id,
                       accessKey=access_key_secret,
                       securityToken=security_token)

    # Read data from the source Logstore within [begin_cursor, end_cursor),
    # which contains all the logs that triggered this invocation
    while True:
        response = client.pull_logs(project_name=log_project,
                                    logstore_name=log_store,
                                    shard_id=shard_id,
                                    cursor=begin_cursor,
                                    count=100,
                                    end_cursor=end_cursor,
                                    compress=False)
        log_group_cnt = response.get_loggroup_count()
        if log_group_cnt == 0:
            break
        logger.info("get %d log group from %s" % (log_group_cnt, log_store))
        logger.info(response.get_loggroup_list())
        begin_cursor = response.get_next_cursor()

    return 'success'
```
2. On the Function Details page, view the function logs to check the latest data obtained during function execution. If the log feature is not enabled for the current function, click Enable With One Click.

At this point, you have completed the configuration of the Simple Log Service trigger. If you need to debug the code in the console, continue with the following steps.
(Optional) Step 4: Test the function by using a simulated event
1. On the Code tab of the function details page, click the icon next to Test Function and select Configure Test Parameters from the drop-down list.
2. In the Configure Test Parameters panel, select Create A New Test Event or Edit An Existing Test Event, enter the event name and content, and then click Confirm. When you create a new test event, we recommend that you use Simple Log Service as the event template. For more information about how to configure the test data, see event.
3. After you configure the test event, click Test Function.

After the execution is complete, you can view the execution result at the top of the Code tab.
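If you prefer to build the simulated event programmatically, the helper below assembles a payload in the format shown in the Input parameter description section. All values passed in are placeholders; in a real test, `beginCursor` and `endCursor` would come from the GetCursor API.

```python
import json

def build_test_event(project, logstore, shard_id, begin_cursor, end_cursor,
                     endpoint="http://cn-hangzhou-intranet.log.aliyuncs.com"):
    """Assemble a simulated SLS trigger event, as bytes, the way the runtime delivers it.

    jobName, taskId, and cursorTime are normally generated by the service;
    the placeholder values here are for local testing only.
    """
    event = {
        "parameter": {},
        "source": {
            "endpoint": endpoint,
            "projectName": project,
            "logstoreName": logstore,
            "shardId": shard_id,
            "beginCursor": begin_cursor,
            "endCursor": end_cursor,
        },
        "jobName": "test-job",
        "taskId": "test-task",
        "cursorTime": 0,
    }
    return json.dumps(event).encode()
```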
FAQ
- What should I do if the Simple Log Service trigger does not invoke the function when new logs are generated?

  Troubleshoot the issue by using the following methods:
  - Check whether new data is written to the Logstore for which the trigger is configured. The function is invoked only when new data is written.
  - Check the trigger logs and the function invocation logs for exceptions.
- Why is the function sometimes invoked more frequently than expected?

  The function is invoked separately for each shard. Even if the total number of invocations across all shards of a Logstore is large, the invocation interval for each individual shard remains consistent with the configured interval.

  The invocation interval for a shard equals the trigger interval configured for data transformation. The following list describes two cases with a configured interval of 60 seconds:
  - No latency: The function is invoked every 60 seconds to process the data generated within `[now - 60s, now)`. Note: Each shard triggers a separate invocation. In a Logstore with 10 shards and no latency, the function is invoked 10 times every 60 seconds to process data in near real time.
  - Latency: If the gap between the position being processed in a shard and the latest data written to Simple Log Service exceeds 10 seconds, the trigger shortens the invocation interval to catch up. For example, the function may be invoked every 2 seconds, with each invocation still processing 60 seconds' worth of data.
- What should I do if `denied by sts or ram, action: log:GetCursorOrData, resource: ****` appears in the function logs?

  This error may occur because no role is configured for the function or the role's permission policy is incorrect. For more information, see Step 2: Configure permissions.