If you want to upload multiple objects at a time, retain the original directory structure of uploaded objects, upload a complete set of objects, or assign resources to objects, you can configure a ZIP package decompression rule and upload ZIP packages to a specified directory in an Object Storage Service (OSS) bucket. Function Compute automatically decompresses the packages based on the decompression rule and saves the decompressed data to the specified directory in OSS.
Prerequisites
Function Compute is activated. If Function Compute is not activated, activate it on the Function Compute page.
Scenarios
Batch upload: Uploading a large number of small objects can be time-consuming. To improve upload efficiency, you can configure a ZIP package decompression rule and upload a ZIP package of objects.
Resource completeness: If multiple files together form a complete resource, you can combine them into a single ZIP package and upload it as one unit instead of uploading the files separately.
Directory structure preservation: A static website can contain a large number of static resources and have a relatively deep directory structure. It takes a lot of time to create individual directories in OSS and upload resources to these directories. In this case, you can configure a ZIP package decompression rule, locally create a ZIP package that has an intended directory structure, and upload the ZIP package to OSS.
Content delivery: If you need to deliver a large number of objects to different users or servers, you can compress the objects into a ZIP package and use ZIP package decompression to decompress the ZIP package to specified directories when it is uploaded to OSS. This reduces transmission time and bandwidth consumption.
How it works
OSS uses Function Compute to decompress ZIP packages uploaded to OSS. The following figure shows the decompression process.
A ZIP package is uploaded to a directory that is specified by the Prefix parameter in a ZIP package decompression rule.
The upload event activates the Function Compute trigger.
When you configure a ZIP package decompression rule, you must authorize the trigger so that OSS can use the trigger role AliyunOSSEventNotificationRole to access Function Compute. The permissions required to access Function Compute are granted to this role by default when you authorize the trigger role.
Function Compute decompresses the ZIP package and saves the decompressed data to the specified OSS directory.
You must authorize Function Compute to access OSS when you configure a ZIP package decompression rule. During authorization, you must create a role that allows Function Compute to read objects from an OSS bucket and to write decompressed data to the bucket. When you create such a role, the read and write permissions on the bucket are granted to the role.
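The actual decompression is performed by a managed function, but the flow described above can be illustrated with a self-contained sketch: read a ZIP package, map each entry to an object key under the destination directory, and collect the data that would be written back to OSS with put_object. The function name and key-mapping details below are hypothetical, for illustration only.

```python
import io
import zipfile

def decompress_to_keys(zip_bytes: bytes, dest_dir: str) -> dict:
    """Map each ZIP entry to an OSS object key under dest_dir and
    return {key: data}. In the real managed function, each entry
    would instead be written back to the bucket."""
    results = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for info in zf.infolist():
            if info.is_dir():
                continue  # directories are implied by object key prefixes in OSS
            key = f"{dest_dir.rstrip('/')}/{info.filename}" if dest_dir else info.filename
            results[key] = zf.read(info)
    return results

# Build a small in-memory ZIP package to exercise the flow.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", "hello")
    zf.writestr("sub/b.txt", "world")

objects = decompress_to_keys(buf.getvalue(), "destfolder")
print(sorted(objects))  # ['destfolder/a.txt', 'destfolder/sub/b.txt']
```

Note that OSS has no physical directories: "decompressing to a directory" simply means prefixing each extracted object key with the destination path.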
Billing
ZIP package decompression is a value-added feature that incurs fees in OSS and Function Compute.
On the OSS side, you are charged for API operation calling and object storage. For more information, see Billing overview.
On the Function Compute side, you are charged for vCPU and memory usage based on the execution duration. For example, if a ZIP package decompression task runs on a Function Compute instance that has 2 vCPUs and 3 GB of memory and takes 5 minutes (300 seconds) to complete, the Function Compute fee is 2 × 0.000015 × 300 + 3 × 0.0000015 × 300 = USD 0.01035. For more information, see Billing overview.
You are not charged for traffic used for data transfers between OSS and Function Compute by using an internal endpoint.
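The fee arithmetic in the example above can be reproduced as follows. The unit prices are taken from the example itself and may not match current Function Compute pricing; check the current price list before estimating costs.

```python
# Unit prices from the example: USD 0.000015 per vCPU-second and
# USD 0.0000015 per GB-second of memory (verify against current pricing).
vcpus, mem_gb, seconds = 2, 3, 5 * 60

vcpu_fee = vcpus * 0.000015 * seconds    # 2 vCPUs for 300 s -> 0.009
mem_fee = mem_gb * 0.0000015 * seconds   # 3 GB for 300 s   -> 0.00135
total = vcpu_fee + mem_fee

print(round(total, 5))  # 0.01035
```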
Limits
Regions: ZIP package decompression is supported in the following regions: China (Hangzhou), China (Shanghai), China (Qingdao), China (Beijing), China (Zhangjiakou), China (Hohhot), China (Shenzhen), China (Chengdu), China (Hong Kong), Singapore, Malaysia (Kuala Lumpur), Indonesia (Jakarta), Japan (Tokyo), Germany (Frankfurt), UK (London), US (Virginia), US (Silicon Valley) and Saudi Arabia (Riyadh).
Storage classes: A Cold Archive object must be restored before it can be decompressed. An Archive object in a bucket for which real-time access of Archive objects is not enabled must also be restored before it can be decompressed.
Object and directory naming: We recommend that you encode object and directory names in UTF-8 or GB 2312 to avoid problems such as decompression interruptions and garbled object and directory names after decompression.
Object size and decompression duration: Each object in a ZIP package cannot exceed 1 GB in size. You can increase the decompression duration if necessary. For more information, see What do I do if a ZIP package fails to be decompressed due to a decompression timeout?
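Regarding the naming recommendation above: tools differ in how they encode entry names, which is a common cause of garbled names after decompression. Python's zipfile module, for example, stores non-ASCII entry names with the ZIP UTF-8 flag set, which matches the recommended encoding. A quick round-trip check:

```python
import io
import zipfile

# zipfile writes non-ASCII entry names as UTF-8 with the UTF-8 flag set,
# so names survive a round trip intact.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("目录/文件.txt", "data")

with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    names = zf.namelist()

print(names)  # ['目录/文件.txt']
```

If you build packages with other tools, confirm that entry names are written in UTF-8 or GB 2312 before uploading.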
Configure a ZIP package decompression rule
Log on to the OSS console.
In the left-side navigation pane, click Buckets. On the Buckets page, find and click the desired bucket.
In the left-side navigation tree, choose .
Click Decompress ZIP Package. In the Decompress ZIP Package panel, configure the parameters described in the following table.
Parameters
Parameter
Required
Description
Service Authorization
Yes
Authorize Function Compute to read data from and write data to OSS and to execute functions.
Click Authorize. Complete authorization on the page that appears.
Authorize Trigger
Yes
Authorize OSS to access Function Compute.
Click Authorize. Complete authorization on the page that appears. If OSS is authorized to access Function Compute, the Trigger Role parameter is displayed instead of the Authorize Trigger parameter.
Prefix
No
Specify the prefix that a ZIP package name must match to trigger decompression. If you upload a ZIP package whose name starts with the specified prefix or upload a ZIP package to the directory specified by the prefix, Function Compute decompresses the package. If you do not specify this parameter, Function Compute decompresses all ZIP packages uploaded to the bucket.
Important: If you do not specify this parameter, decompression tasks may be repeatedly executed. We recommend that you specify a prefix for each decompression rule. For more information, see How can I avoid trigger loops?
Destination Directory
No
Specify the directory in which the objects extracted from the ZIP package are stored. If you do not specify this parameter, Function Compute decompresses the ZIP package to the root directory of the current bucket.
If you want to store the objects extracted from a ZIP package in a subdirectory that has the same name as the package in the destination directory, select Add the compressed object name to the destination directory.
If you want to store the objects extracted from a ZIP package in the destination directory without retaining the original ZIP package name, select Decompress to the destination directory. For more information about how to configure this parameter, see the examples in the following table.
Warning: To maintain OSS-HDFS availability and prevent data contamination, do not set Destination Directory to .dlsdata/ when you configure a ZIP package decompression rule for a bucket for which OSS-HDFS is enabled.
Decompression configuration examples
Scenario
Configuration method
Storage structure after decompression
Decompress all ZIP packages uploaded to the zipfolder directory to the destfolder directory, without creating subdirectories that have the same names as the ZIP packages.
Set Prefix to zipfolder/.
Set Destination Directory to destfolder.
Select Decompress to the destination directory.
bucket
├─── zipfolder/
│    ├─── a.zip
│    └─── b.zip
└─── destfolder/
     ├─── a.txt
     ├─── b.txt
     └─── ...
Decompress all ZIP packages uploaded to the zipfolder directory to the subdirectories that have the same names as the packages in the root directory of the bucket.
Configure the following parameters:
Set Prefix to zipfolder/.
Leave Destination Directory empty.
Select Add the compressed object name to the destination directory.
bucket
├─── zipfolder/
│    ├─── a.zip
│    └─── b.zip
├─── a/
│    ├─── a.txt
│    └─── ...
└─── b/
     ├─── b.txt
     └─── ...
Decompress all ZIP packages uploaded to the zipfolder directory to the subdirectories that have the same names as the packages in the destfolder directory.
Configure the following parameters:
Set Prefix to zipfolder/.
Set Destination Directory to destfolder.
Select Add the compressed object name to the destination directory.
bucket
├─── zipfolder/
│    ├─── a.zip
│    └─── b.zip
└─── destfolder/
     ├─── a/
     │    ├─── a.txt
     │    └─── ...
     └─── b/
          ├─── b.txt
          └─── ...
Select I have read the terms of service and agreed to activate Function Compute and process compressed files by using Function Compute. Only file or folder names encoded in UTF-8 or GB 2312 can be processed. and click OK.
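The three configurations in the examples table differ only in how the destination object key is constructed from the destination directory, the ZIP package name, and the entry path. A sketch of that mapping, with a hypothetical helper name chosen for illustration:

```python
from os.path import basename, splitext

def dest_key(zip_key: str, entry: str, dest_dir: str, add_zip_name: bool) -> str:
    """Compute the OSS key for one ZIP entry, mirroring the rule options:
    add_zip_name=True  -> 'Add the compressed object name to the destination directory'
    add_zip_name=False -> 'Decompress to the destination directory'"""
    parts = []
    if dest_dir:
        parts.append(dest_dir.rstrip("/"))
    if add_zip_name:
        # 'zipfolder/a.zip' -> subdirectory 'a'
        parts.append(splitext(basename(zip_key))[0])
    parts.append(entry)
    return "/".join(parts)

# Row 1: decompress to destfolder without a per-package subdirectory.
print(dest_key("zipfolder/a.zip", "a.txt", "destfolder", False))  # destfolder/a.txt
# Row 2: empty destination directory, add the ZIP name -> root-level subdirectory.
print(dest_key("zipfolder/a.zip", "a.txt", "", True))             # a/a.txt
# Row 3: destination directory plus per-package subdirectory.
print(dest_key("zipfolder/b.zip", "b.txt", "destfolder", True))   # destfolder/b/b.txt
```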
Modify a ZIP package decompression rule
You can modify a ZIP package decompression rule based on your business requirements.
Modify the Prefix parameter
On the Decompress ZIP Package page in the OSS console, find the trigger that you want to modify and click Edit in the Actions column.
On the Triggers tab of the function details page, find the trigger and click Modify in the Actions column.
In the Modify Trigger panel, modify the Object Prefix parameter and retain the default settings for other parameters.
Click OK.
Modify function configurations
On the Configurations tab of the function details page, click Modify in the Basic Settings section or the Environment Information section.
Modify the function configurations, such as Memory and Execution Timeout Period.
For more information, see Manage functions.
Delete a trigger
A deleted trigger is no longer available and cannot be restored. If you delete a trigger, an ongoing decompression task under the trigger is not interrupted.
On the Triggers tab of the function details page, find the trigger that you want to delete and click Delete in the Actions column.
In the message that appears, click Delete.
References
Use Function Compute to download multiple objects as a package