
Object Storage Service: ZIP package decompression

Last Updated: Nov 15, 2024

If you want to upload multiple objects at a time, retain the original directory structure of uploaded objects, upload a complete set of resources, or deliver objects in bulk, you can configure a ZIP package decompression rule and upload ZIP packages to the specified directory in an Object Storage Service (OSS) bucket. Function Compute automatically decompresses the ZIP packages based on the decompression rule and saves the decompressed data to the specified directory in OSS.

Prerequisites

Function Compute is activated. If Function Compute is not activated, activate it on the Function Compute page.

Scenarios

  • Batch upload: Uploading a large number of small objects can be time-consuming. To improve upload efficiency, you can configure a ZIP package decompression rule and upload a ZIP package of objects.

  • Resource completeness: If multiple files together form a complete resource, you can compress them into one ZIP package and upload it instead of uploading the files separately. This keeps the resource complete and improves upload efficiency.

  • Directory structure preservation: A static website can contain a large number of static resources and have a relatively deep directory structure. It takes a lot of time to create individual directories in OSS and upload resources to these directories. In this case, you can configure a ZIP package decompression rule, locally create a ZIP package that has an intended directory structure, and upload the ZIP package to OSS.

  • Content delivery: If you need to deliver a large number of objects to different users or servers, you can compress the objects into a ZIP package and use ZIP package decompression to decompress the ZIP package to specified directories when it is uploaded to OSS. This reduces transmission time and bandwidth consumption.

How it works

OSS uses Function Compute to decompress ZIP packages uploaded to OSS. The decompression process consists of the following steps.

  1. A ZIP package is uploaded to a directory that is specified by the Prefix parameter in a ZIP package decompression rule.

  2. The Function Compute trigger is fired.

    You need to authorize the trigger when you configure a ZIP package decompression rule so that OSS can use the trigger role AliyunOSSEventNotificationRole to access Function Compute. The permissions to access Function Compute are granted to the role by default when you authorize a trigger role.

  3. Function Compute decompresses the ZIP package and saves the decompressed data to the specified OSS directory.

    You must authorize Function Compute to access OSS when you configure a ZIP package decompression rule. During authorization, you must create a role that allows Function Compute to read objects from an OSS bucket and to write decompressed data to the bucket. When you create such a role, the read and write permissions on the bucket are granted to the role.
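
For reference, the event that OSS delivers to the triggered function is a JSON payload that identifies the uploaded package. The following minimal Python sketch only parses that payload; it is not the managed decompression function, and the field names are based on the documented OSS event notification format, so verify them against a test event generated by your own trigger.

    # Minimal sketch (not the managed decompression function): it only shows how an
    # OSS-triggered Function Compute handler could read the uploaded package's location.
    import json

    def handler(event, context):
        evt = json.loads(event)  # the OSS trigger delivers the event as a JSON payload
        for record in evt.get("events", []):
            bucket_name = record["oss"]["bucket"]["name"]
            object_key = record["oss"]["object"]["key"]   # for example, zipfolder/a.zip
            print(f"ZIP package {object_key} uploaded to bucket {bucket_name}")
            # At this point, the managed decompression function reads the package from OSS,
            # extracts its entries, and writes them to the configured destination directory.
        return "ok"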

Billing

ZIP package decompression is a value-added feature that incurs fees in OSS and Function Compute.

  • On the OSS side, you are charged for API operation calling and object storage. For more information, see Billing overview.

  • On the Function Compute side, you are charged for vCPU and memory usage based on the execution duration. For example, if a ZIP package decompression task runs on a Function Compute instance that has 2 vCPUs and 3 GB of memory and takes 5 minutes (300 seconds) to complete, the fee charged by Function Compute is 2 × 0.000015 × 300 + 3 × 0.0000015 × 300 = USD 0.01035. For more information, see Billing overview.

You are not charged for traffic generated by data transfers between OSS and Function Compute over an internal endpoint.
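
As a quick check of the numbers above, the following short Python calculation reproduces the example fee. The unit prices (USD 0.000015 per vCPU-second and USD 0.0000015 per GB-second of memory) are taken from the example itself; check the current Function Compute pricing for your region before relying on them.

    # Reproduces the example fee above. Unit prices are taken from the example and may
    # differ from the current Function Compute prices in your region.
    vcpu_price_per_core_second = 0.000015    # USD per vCPU core-second (assumed)
    memory_price_per_gb_second = 0.0000015   # USD per GB-second of memory (assumed)

    vcpus = 2
    memory_gb = 3
    duration_seconds = 5 * 60                # 5 minutes

    fee = (vcpus * vcpu_price_per_core_second * duration_seconds
           + memory_gb * memory_price_per_gb_second * duration_seconds)
    print(f"Estimated Function Compute fee: USD {fee:.5f}")   # USD 0.01035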

Limits

  • Regions: ZIP package decompression is supported in the following regions: China (Hangzhou), China (Shanghai), China (Qingdao), China (Beijing), China (Zhangjiakou), China (Hohhot), China (Shenzhen), China (Chengdu), China (Hong Kong), Singapore, Malaysia (Kuala Lumpur), Indonesia (Jakarta), Japan (Tokyo), Germany (Frankfurt), UK (London), US (Virginia), US (Silicon Valley), and Saudi Arabia (Riyadh).

  • Storage classes: A Cold Archive object must be restored before it can be decompressed. An Archive object in a bucket for which real-time access of Archive objects is not enabled must also be restored before it can be decompressed.

  • Object and directory naming: We recommend that you encode your objects and directories in UTF-8 or GB 2312 to avoid potential problems, such as decompression interruptions and garbled characters in object and directory names after decompression.

  • Object size and decompression duration: Each object in a ZIP package cannot exceed 1 GB in size, and the default maximum decompression duration is 2 hours. You can increase the decompression duration if necessary. For more information, see What do I do if a ZIP package fails to be decompressed due to a decompression timeout?

Configure a ZIP package decompression rule

  1. Log on to the OSS console.

  2. In the left-side navigation pane, click Buckets. On the Buckets page, find and click the desired bucket.

  3. In the left-side navigation tree, choose Data Processing > Decompress ZIP Package.

  4. Click Decompress ZIP Package. In the Decompress ZIP Package panel, configure the parameters described in the following table.

    Parameters

    Service Authorization (required)

    Authorize Function Compute to read data from and write data to OSS and to execute functions.

    Click Authorize and complete authorization on the page that appears.

    Authorize Trigger (required)

    Authorize OSS to access Function Compute.

    Click Authorize and complete authorization on the page that appears. If OSS is already authorized to access Function Compute, the Trigger Role parameter is displayed instead of the Authorize Trigger parameter.

    Prefix (optional)

    Specify the prefix that ZIP package names must contain to trigger Function Compute to decompress a ZIP package. If you upload a ZIP package whose name contains the specified prefix or upload a ZIP package to the directory specified by the prefix, Function Compute decompresses the ZIP package. If you do not specify this parameter, Function Compute decompresses all ZIP packages uploaded to the bucket.

    Important

    If you do not specify this parameter, decompression tasks may be repeatedly executed. Therefore, we recommend that you specify a prefix for each decompression rule. For more information, see How can I avoid trigger loops?

    Destination Directory (optional)

    Specify the directory in which the objects extracted from the ZIP package are stored. If you do not specify this parameter, Function Compute decompresses the ZIP package to the root directory of the current bucket.

    • If you want to store the objects extracted from a ZIP package in a subdirectory that has the same name as the package in the destination directory, select Add the compressed object name to the destination directory.

    • If you want to store the objects extracted from a ZIP package directly in the destination directory without creating a subdirectory named after the package, select Decompress to the destination directory. For more information about how to configure this parameter, see the following examples.

    Warning

    To maintain OSS-HDFS availability and prevent data contamination, do not set Destination Directory to .dlsdata/ when you configure a ZIP package decompression rule for a bucket for which OSS-HDFS is enabled.

    Decompression configuration examples

    Example 1: Decompress all ZIP packages uploaded to the zipfolder directory to the destfolder directory, without creating subdirectories that have the same names as the ZIP packages.

    Configure the following parameters:

    • Set Prefix to zipfolder/.

    • Set Destination Directory to destfolder.

    • Select Decompress to the destination directory.

    Storage structure after decompression:

    bucket
    ├─── zipfolder/
    │    ├─── a.zip
    │    └─── b.zip
    └─── destfolder/
         ├─── a.txt
         ├─── b.txt
         └─── ...

    Example 2: Decompress all ZIP packages uploaded to the zipfolder directory to subdirectories that have the same names as the packages in the root directory of the bucket.

    Configure the following parameters:

    • Set Prefix to zipfolder/.

    • Leave Destination Directory empty.

    • Select Add the compressed object name to the destination directory.

    Storage structure after decompression:

    bucket
    ├─── zipfolder/
    │    ├─── a.zip
    │    └─── b.zip
    ├─── a/
    │    ├─── a.txt
    │    └─── ...
    └─── b/
         ├─── b.txt
         └─── ...

    Example 3: Decompress all ZIP packages uploaded to the zipfolder directory to subdirectories that have the same names as the packages in the destfolder directory.

    Configure the following parameters:

    • Set Prefix to zipfolder/.

    • Set Destination Directory to destfolder.

    • Select Add the compressed object name to the destination directory.

    Storage structure after decompression:

    bucket
    ├─── zipfolder/
    │    ├─── a.zip
    │    └─── b.zip
    └─── destfolder/
         ├─── a/
         │    ├─── a.txt
         │    └─── ...
         └─── b/
              ├─── b.txt
              └─── ...

  5. Select I have read the terms of service and agreed to activate Function Compute and process compressed files by using Function Compute. Only file or folder names encoded in UTF-8 or GB 2312 can be processed. and click OK.

Modify a ZIP package decompression rule

You can modify a ZIP package decompression rule based on your business requirements.

Modify the Prefix parameter

  1. On the Decompress ZIP Package page in the OSS console, find the trigger that you want to modify and click Edit in the Actions column.

  2. On the Triggers tab of the function details page, find the trigger and click Modify in the Actions column.

  3. In the Modify Trigger panel, modify the Object Prefix parameter and retain the default settings for other parameters.

  4. Click OK.

Modify function configurations

  1. On the Configurations tab of the function details page, click Modify in the Basic Settings section or the Environment Information section.

  2. Modify the function configurations, such as Memory and Execution Timeout Period.

    For more information, see Manage functions.

Delete a trigger

Note

A deleted trigger is no longer available and cannot be restored. If you delete a trigger, an ongoing decompression task under the trigger is not interrupted.

  1. On the Triggers tab of the function details page, find the trigger that you want to delete and click Delete in the Actions column.

  2. In the message that appears, click Delete.

References

Use Function Compute to download multiple objects as a package

FAQ

What do I do if a ZIP package upload fails to trigger a Function Compute decompression task?

ZIP package decompression involves interactions between OSS and Function Compute. A ZIP package upload that matches an existing decompression rule triggers the decompression function in Function Compute. However, OSS may occasionally fail to invoke the function. To check whether decompression was triggered, decode the Base64-encoded value of the x-oss-event-status response header that is returned for the upload request. If the decoded value is {"Result": "Ok"}, decompression was triggered. Otherwise, decompression was not triggered, and we recommend that you re-upload the ZIP package. For more information, see Simple upload.

Note

Each object in a ZIP package cannot exceed 1 GB in size. The default maximum decompression duration for a ZIP package is 2 hours.
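
For reference, the following Python sketch uploads a ZIP package with the OSS Python SDK (oss2) and decodes the x-oss-event-status response header as described above. The endpoint, bucket name, object key, and credentials are placeholders, and the way response headers are exposed may differ slightly across SDK versions.

    # Sketch: upload a ZIP package and check whether decompression was triggered.
    # Endpoint, bucket name, key, and credentials are placeholders; adjust for your environment.
    import base64
    import json
    import oss2

    auth = oss2.Auth("<AccessKeyId>", "<AccessKeySecret>")
    bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "examplebucket")

    result = bucket.put_object_from_file("zipfolder/a.zip", "local/a.zip")
    status = result.resp.headers.get("x-oss-event-status")   # Base64-encoded JSON, if present

    if status:
        decoded = json.loads(base64.b64decode(status))
        if decoded.get("Result") == "Ok":
            print("Decompression was triggered.")
        else:
            print("Decompression was not triggered; consider re-uploading the package.")
    else:
        print("No x-oss-event-status header returned; decompression was not triggered.")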

What do I do if a ZIP package fails to be decompressed due to a decompression timeout?

By default, the maximum decompression duration for a ZIP package is 2 hours. You can increase the maximum decompression duration to 24 hours. If a ZIP package requires more than 2 hours to be decompressed, perform the following steps to modify the maximum decompression duration.

  1. On the Decompress ZIP Package page in the OSS console, find the trigger that you want to modify and click Edit in the Actions column.

  2. On the function details page, click the Configurations tab.

  3. Click Modify next to Environment Information.

  4. Modify the value of the Execution Timeout Period field and click OK.

How do I decompress a package that contains an object larger than 1 GB in size?

If an object in a ZIP package exceeds 1 GB in size, you can store the package in File Storage NAS and decompress the package to a directory in File Storage NAS. This solution incurs a storage fee on File Storage NAS. For more information, visit the unzip-oss-nas page on GitHub.

Does Function Compute decompress a ZIP package that is within another ZIP package?

No, Function Compute does not decompress a ZIP package within another ZIP package.

If you configure a ZIP package decompression rule for a bucket, a ZIP package uploaded to the bucket triggers Function Compute to decompress the ZIP package only once. If the ZIP package contains another ZIP package, the inner ZIP package is not decompressed. You can upload the inner ZIP package separately to the bucket to trigger another decompression task.

Does ZIP package decompression allow data to be decompressed to a different bucket?

No, ZIP package decompression allows a ZIP package to be decompressed to a directory only within the bucket to which the ZIP package is uploaded. To decompress a ZIP package to a different bucket, you need to perform custom development in Function Compute.
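
For reference, a custom function along the following lines could extract a ZIP package into a different bucket. This is a minimal sketch, not the built-in decompression function: the endpoint, destination bucket name, and target prefix are placeholders, the event field and credential attribute names should be verified against your Function Compute runtime, and the function's service role must have read access to the source bucket and write access to the destination bucket.

    # Minimal sketch of a custom Function Compute handler that extracts a ZIP object into a
    # different bucket. Endpoint, destination bucket, and prefix are placeholders; credential
    # and event field names should be verified against your runtime and trigger configuration.
    import io
    import json
    import zipfile
    import oss2

    DEST_BUCKET = "destination-bucket"                      # placeholder destination bucket
    ENDPOINT = "oss-cn-hangzhou-internal.aliyuncs.com"      # placeholder internal endpoint

    def handler(event, context):
        record = json.loads(event)["events"][0]
        src_bucket_name = record["oss"]["bucket"]["name"]
        zip_key = record["oss"]["object"]["key"]

        creds = context.credentials                          # temporary credentials of the service role
        auth = oss2.StsAuth(creds.access_key_id, creds.access_key_secret, creds.security_token)
        src = oss2.Bucket(auth, ENDPOINT, src_bucket_name)
        dst = oss2.Bucket(auth, ENDPOINT, DEST_BUCKET)

        # Download the ZIP package into memory; large packages would need streaming or local disk.
        data = io.BytesIO(src.get_object(zip_key).read())
        with zipfile.ZipFile(data) as archive:
            for name in archive.namelist():
                if name.endswith("/"):
                    continue                                 # skip directory entries
                with archive.open(name) as entry:
                    dst.put_object(f"unzipped/{name}", entry.read())
        return "ok"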

Do objects extracted from a ZIP package appear in the destination directory while decompression is in progress?

Yes. Objects extracted by an ongoing decompression task appear in the destination directory as they are extracted.

Function Compute uploads the objects that have already been extracted to OSS while it continues to decompress the remaining data in the package.

How do I check whether a decompression task is complete?

You can view the function invocation logs to check whether a decompression task is complete.

  1. On the Decompress ZIP Package page in the OSS console, find the trigger and click Edit in the Actions column.

  2. On the function details page, click the Logs tab.

  3. Click Enable.

  4. Attach the AliyunLogFullAccess policy to the RAM role. For more information, see Grant permissions to a RAM role.

  5. If a decompression task is triggered on a ZIP package upload, view the log information on the Logs tab to check if decompression is complete.

    If the log contains "FC Invoke End", the decompression task is complete.


Does ZIP package decompression support .rar and .tar.gz files?

No, ZIP package decompression does not support .rar and .tar.gz files.

ZIP package decompression supports only ZIP files.

Is a notification sent after completion of decompression?

No.

You can check if a decompression task is complete by checking logs on the Logs tab of the function details page or configuring a destination for an asynchronous invocation of the decompression function. For more information, see Configure a destination for an asynchronous invocation.

Does OSS support online compression?

No, OSS does not support online compression.

You can use Function Compute to download objects from OSS to your local device as a package. For more information, see Use Function Compute to package and download OSS objects.

Can I use ZIP package decompression to decompress multipart ZIP packages?

No, you cannot use ZIP package decompression to decompress multipart ZIP packages.

ZIP package decompression does not merge content from multipart ZIP packages. Therefore, the feature cannot decompress multipart ZIP packages.

Why is the Decompress ZIP Package button grayed out?

The Decompress ZIP Package button is grayed out for one of the following reasons:

  • Function Compute is not activated. In this case, activate Function Compute on the Function Compute page.

  • ZIP package decompression is not supported in the region in which your bucket resides. For more information, see Limits.