
Object Storage Service:Resumable upload

Last Updated: Sep 23, 2024

We recommend that you use resumable upload when you upload a large object whose size is greater than 5 GB, because network instability or program exceptions can interrupt a single long-running upload. Resumable upload splits the object into multiple parts and uploads the parts in parallel to speed up the process. During the upload, the progress is recorded in a checkpoint file. If a part fails to upload, the next attempt resumes from the position that is recorded in the checkpoint file. After all parts are uploaded, they are combined into a complete object.

Usage notes

  • In this topic, the public endpoint of the China (Hangzhou) region is used. If you want to access OSS from other Alibaba Cloud services in the same region as OSS, use an internal endpoint. For more information about OSS regions and endpoints, see Regions and endpoints.

  • In this topic, access credentials are obtained from environment variables. For more information about how to configure access credentials, see Configure access credentials.

  • In this topic, an OSSClient instance is created by using an OSS endpoint. If you want to create an OSSClient instance by using custom domain names or Security Token Service (STS), see Initialization.

  • To use resumable upload, you must have the oss:PutObject permission. For more information, see Attach a custom policy to a RAM user.

  • The upload progress is recorded in the checkpoint file. Make sure that you have write permissions on the checkpoint file.

  • The checkpoint file contains a checksum. This checksum cannot be modified. If the checkpoint file is damaged, you must re-upload all parts of the object.

  • If the local file is modified during the upload, you must re-upload all parts of the object.

Implementation method

You can use the Bucket.UploadFile method to perform resumable upload. The following list describes the parameters that you can configure.

  • objectKey: the name of the OSS object. This parameter is equivalent to objectName.

  • filePath: the path of the local file that you want to upload to OSS.

  • partSize: the size of each part. Valid values: 100 KB to 5 GB. Default value: 100 KB.

  • options: the options for the upload. Valid values:

      • Routines: specifies the number of parts that can be uploaded in parallel. Default value: 1. The value 1 indicates that concurrent upload is not used.

      • Checkpoint: specifies whether resumable upload is enabled and the path of the checkpoint file. By default, resumable upload is disabled.

        For example, oss.Checkpoint(true, "") enables resumable upload and uses file.cp as the checkpoint file, where file is the name of the local file and file.cp is created in the same directory as the local file. You can also use oss.Checkpoint(true, "your-cp-file.cp") to specify a custom checkpoint file.

Note

For more information, see Manage object metadata.

Examples

The following sample code provides an example on how to perform resumable upload:

package main

import (
    "log"

    "github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
    // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
    provider, err := oss.NewEnvironmentVariableCredentialsProvider()
    if err != nil {
        log.Fatalf("Failed to create credentials provider: %v", err)
    }

    // Create an OSSClient instance. 
    // Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. Specify your actual endpoint. 
    client, err := oss.New("yourEndpoint", "", "", oss.SetCredentialsProvider(&provider))
    if err != nil {
        log.Fatalf("Failed to create OSS client: %v", err)
    }

    // Specify the name of the bucket. Example: examplebucket. 
    bucket, err := client.Bucket("examplebucket")
    if err != nil {
        log.Fatalf("Failed to get bucket: %v", err)
    }

    // When you use UploadFile to perform resumable upload, the number of parts cannot exceed 10000. 
    // Specify the size of each part based on the size of the object that you want to upload. The size of each part ranges from 100 KB to 5 GB. Default value: 100 KB (100 x 1024). 
    // Use oss.Routines to set the number of parts that can be uploaded in parallel to 3. 
    // Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt. 
    // Specify the full path of the local file. Example: D:\\localpath\\examplefile.txt. By default, if you do not specify the path of the local file, the file is uploaded from the path of the project to which the sample program belongs. 
    err = bucket.UploadFile("exampledir/exampleobject.txt", "D:\\localpath\\examplefile.txt", 100*1024, oss.Routines(3), oss.Checkpoint(true, ""))
    if err != nil {
        log.Fatalf("Failed to upload file: %v", err)
    }

    log.Println("File uploaded successfully.")
}

FAQ

What do I do if "Too many parts, Please increase part size." is reported when I perform resumable upload?
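This error indicates that the specified part size splits the file into more than 10,000 parts, which is the maximum that UploadFile supports. One way to avoid it is to derive the part size from the file size before calling UploadFile. The following sketch shows this calculation; pickPartSize and its constants are illustrative helpers, not part of the OSS SDK:

```go
package main

import "fmt"

const (
	minPartSize = 100 * 1024 // The minimum part size allowed by OSS: 100 KB.
	maxParts    = 10000      // The maximum number of parts that UploadFile supports.
)

// pickPartSize returns a part size, in bytes, that keeps the part count
// at or below maxParts while never going below the 100 KB minimum.
func pickPartSize(fileSize int64) int64 {
	partSize := int64(minPartSize)
	// Round up so that fileSize/partSize never exceeds maxParts.
	if needed := (fileSize + maxParts - 1) / maxParts; needed > partSize {
		partSize = needed
	}
	return partSize
}

func main() {
	// A 20 GB file cannot use the 100 KB minimum: that would require more than 200,000 parts.
	fmt.Println(pickPartSize(20 << 30)) // 2147484
	// A 1 MB file fits comfortably, so the minimum part size is used.
	fmt.Println(pickPartSize(1 << 20)) // 102400
}
```

You can then pass the computed value as the partSize argument of bucket.UploadFile instead of a hard-coded 100*1024.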

References

  • For the complete sample code that is used to perform resumable upload, visit GitHub.

  • For more information about the API operation that you can call to perform resumable upload, see UploadFile.