If you use simple upload to upload an object larger than 5 GB, the upload may take a long time and may fail because of network interruptions or program exceptions. In such cases, you can split the object into multiple parts and upload the parts in parallel to accelerate the upload. If a part fails to upload, you only need to re-upload that part.
Scenarios
Accelerated upload of large objects: If you want to upload an object whose size is larger than 5 GB, you can use multipart upload to split the object into multiple parts and upload the parts in parallel to accelerate the upload.
Network jitter: Multipart upload is suitable for scenarios in which network conditions are poor. If a part fails to upload, you only need to re-upload that part, which saves time and bandwidth.
Upload pause and resumption: Multipart upload tasks do not expire. You can pause and resume a multipart upload task at any time before the task is complete or canceled.
Unknown object size: In scenarios such as video surveillance, the final object sizes may be unknown. In this case, you can use multipart upload to upload the objects.
Process
To upload a local file by using multipart upload, perform the following steps:
1. Call the InitiateMultipartUpload operation to initiate a multipart upload task.
2. Call the UploadPart operation to upload parts. The parts are combined based on the part numbers that you specify during the upload, so parts do not need to be uploaded in order and can be uploaded in parallel. However, uploading more parts in parallel does not necessarily accelerate the upload. We recommend that you specify the number of parallel uploads based on your network conditions and the workload of your devices. By default, if you upload parts but do not call the CompleteMultipartUpload operation to combine them into an object, the uploaded parts are not automatically deleted. To cancel the upload task and delete the parts, call the AbortMultipartUpload operation.
3. Call the CompleteMultipartUpload operation to combine the uploaded parts into an object.
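The splitting logic in step 2 can be sketched independently of any SDK. The following minimal Python sketch computes the byte range for each part; the 5 MB part size is an arbitrary choice for illustration, and OSS part numbers run from 1 to 10,000.

```python
# Minimal sketch: compute (part_number, offset, size) tuples for a file,
# mirroring the splitting done before calling UploadPart for each part.
# The default part size of 5 MB is an arbitrary illustrative value.

def split_into_parts(file_size, part_size=5 * 1024 * 1024):
    parts = []
    offset = 0
    part_number = 1  # OSS part numbers start at 1 (valid range: 1-10000)
    while offset < file_size:
        # The last part may be smaller than part_size.
        size = min(part_size, file_size - offset)
        parts.append((part_number, offset, size))
        offset += size
        part_number += 1
    return parts

# A 12 MB file split into 5 MB parts yields two full parts and a 2 MB tail.
print(split_into_parts(12 * 1024 * 1024))
```

Each tuple maps directly to one UploadPart call: seek to `offset`, read `size` bytes, and upload them with `part_number`.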
Methods
Note
You cannot perform multipart upload in the OSS console. If you want to upload an object whose size is larger than 5 GB, use ossbrowser, OSS SDKs, or ossutil.
Use ossutil
If you run the cp command provided by ossutil 2.0 to upload a large local file, ossutil automatically uses multipart upload to upload the local file.
ossutil cp D:/localpath/example.iso oss://examplebucket/desfolder/
To manually upload a local file by using multipart upload, you can use the initiate-multipart-upload, upload-part, and complete-multipart-upload operations together.
Use OSS SDKs
Java
Python
Go
Node.js
PHP
C#
Browser.js
Android
C++
Objective-C
C
The following sample code provides an example on how to implement multipart upload by using OSS SDK for Java:
import com.aliyun.oss.*;
import com.aliyun.oss.common.auth.*;
import com.aliyun.oss.common.comm.SignVersion;
import com.aliyun.oss.internal.Mimetypes;
import com.aliyun.oss.model.*;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;
public class Demo {
public static void main(String[] args) throws Exception {
String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
String bucketName = "examplebucket";
String objectName = "exampledir/exampleobject.txt";
String filePath = "D:\\localpath\\examplefile.txt";
String region = "cn-hangzhou";
ClientBuilderConfiguration clientBuilderConfiguration = new ClientBuilderConfiguration();
clientBuilderConfiguration.setSignatureVersion(SignVersion.V4);
OSS ossClient = OSSClientBuilder.create()
.endpoint(endpoint)
.credentialsProvider(credentialsProvider)
.clientConfiguration(clientBuilderConfiguration)
.region(region)
.build();
try {
InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest(bucketName, objectName);
ObjectMetadata metadata = new ObjectMetadata();
if (metadata.getContentType() == null) {
metadata.setContentType(Mimetypes.getInstance().getMimetype(new File(filePath), objectName));
}
System.out.println("Content-Type: " + metadata.getContentType());
request.setObjectMetadata(metadata);
InitiateMultipartUploadResult upresult = ossClient.initiateMultipartUpload(request);
String uploadId = upresult.getUploadId();
List<PartETag> partETags = new ArrayList<PartETag>();
final long partSize = 1 * 1024 * 1024L;
final File sampleFile = new File(filePath);
long fileLength = sampleFile.length();
int partCount = (int) (fileLength / partSize);
if (fileLength % partSize != 0) {
partCount++;
}
for (int i = 0; i < partCount; i++) {
long startPos = i * partSize;
long curPartSize = (i + 1 == partCount) ? (fileLength - startPos) : partSize;
UploadPartRequest uploadPartRequest = new UploadPartRequest();
uploadPartRequest.setBucketName(bucketName);
uploadPartRequest.setKey(objectName);
uploadPartRequest.setUploadId(uploadId);
InputStream instream = new FileInputStream(sampleFile);
instream.skip(startPos);
uploadPartRequest.setInputStream(instream);
uploadPartRequest.setPartSize(curPartSize);
uploadPartRequest.setPartNumber(i + 1);
UploadPartResult uploadPartResult = ossClient.uploadPart(uploadPartRequest);
partETags.add(uploadPartResult.getPartETag());
instream.close();
}
CompleteMultipartUploadRequest completeMultipartUploadRequest =
new CompleteMultipartUploadRequest(bucketName, objectName, uploadId, partETags);
CompleteMultipartUploadResult completeMultipartUploadResult = ossClient.completeMultipartUpload(completeMultipartUploadRequest);
System.out.println("Upload successful,ETag:" + completeMultipartUploadResult.getETag());
} catch (OSSException oe) {
System.out.println("Caught an OSSException, which means your request made it to OSS, "
+ "but was rejected with an error response for some reason.");
System.out.println("Error Message:" + oe.getErrorMessage());
System.out.println("Error Code:" + oe.getErrorCode());
System.out.println("Request ID:" + oe.getRequestId());
System.out.println("Host ID:" + oe.getHostId());
} catch (ClientException ce) {
System.out.println("Caught a ClientException, which means the client encountered "
+ "a serious internal problem while trying to communicate with OSS, "
+ "such as not being able to access the network.");
System.out.println("Error Message:" + ce.getMessage());
} finally {
if (ossClient != null) {
ossClient.shutdown();
}
}
}
}
For information about sample code in other scenarios, see Multipart upload.
The following sample code provides an example on how to implement multipart upload by using OSS SDK for Python:
import os
from oss2 import SizedFileAdapter, determine_part_size
from oss2.models import PartInfo
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())
endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
region = "cn-hangzhou"
bucket = oss2.Bucket(auth, endpoint, "yourBucketName", region=region)
key = 'exampledir/exampleobject.txt'
filename = 'D:\\localpath\\examplefile.txt'
total_size = os.path.getsize(filename)
part_size = determine_part_size(total_size, preferred_size=100 * 1024)
upload_id = bucket.init_multipart_upload(key).upload_id
parts = []
with open(filename, 'rb') as fileobj:
part_number = 1
offset = 0
while offset < total_size:
num_to_upload = min(part_size, total_size - offset)
result = bucket.upload_part(key, upload_id, part_number,
SizedFileAdapter(fileobj, num_to_upload))
parts.append(PartInfo(part_number, result.etag))
offset += num_to_upload
part_number += 1
headers = dict()
bucket.complete_multipart_upload(key, upload_id, parts, headers=headers)
For information about sample code in other scenarios, see Multipart upload.
The following sample code provides an example on how to implement multipart upload by using OSS SDK for Go:
package main
import (
"bufio"
"bytes"
"context"
"flag"
"io"
"log"
"os"
"sync"
"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss"
"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss/credentials"
)
var (
region string
bucketName string
objectName string
)
func init() {
flag.StringVar(&region, "region", "", "The region in which the bucket is located.")
flag.StringVar(&bucketName, "bucket", "", "The name of the source bucket.")
flag.StringVar(&objectName, "object", "", "The name of the source object.")
}
func main() {
flag.Parse()
var uploadId string
if len(bucketName) == 0 {
flag.PrintDefaults()
log.Fatalf("invalid parameters, source bucket name required")
}
if len(region) == 0 {
flag.PrintDefaults()
log.Fatalf("invalid parameters, region required")
}
if len(objectName) == 0 {
flag.PrintDefaults()
log.Fatalf("invalid parameters, source object name required")
}
cfg := oss.LoadDefaultConfig().
WithCredentialsProvider(credentials.NewEnvironmentVariableCredentialsProvider()).
WithRegion(region)
client := oss.NewClient(cfg)
initRequest := &oss.InitiateMultipartUploadRequest{
Bucket: oss.Ptr(bucketName),
Key: oss.Ptr(objectName),
}
initResult, err := client.InitiateMultipartUpload(context.TODO(), initRequest)
if err != nil {
log.Fatalf("failed to initiate multipart upload %v", err)
}
log.Printf("initiate multipart upload result:%#v\n", *initResult.UploadId)
uploadId = *initResult.UploadId
var wg sync.WaitGroup
var parts []oss.UploadPart
count := 3
var mu sync.Mutex
file, err := os.Open("yourLocalFile")
if err != nil {
log.Fatalf("failed to open local file %v", err)
}
defer file.Close()
bufReader := bufio.NewReader(file)
content, err := io.ReadAll(bufReader)
if err != nil {
log.Fatalf("failed to read local file %v", err)
}
log.Printf("file size: %d\n", len(content))
chunkSize := len(content) / count
if chunkSize == 0 {
chunkSize = 1
}
for i := 0; i < count; i++ {
start := i * chunkSize
end := start + chunkSize
if i == count-1 {
end = len(content)
}
wg.Add(1)
go func(partNumber int, start, end int) {
defer wg.Done()
partRequest := &oss.UploadPartRequest{
Bucket: oss.Ptr(bucketName),
Key: oss.Ptr(objectName),
PartNumber: int32(partNumber),
UploadId: oss.Ptr(uploadId),
Body: bytes.NewReader(content[start:end]),
}
partResult, err := client.UploadPart(context.TODO(), partRequest)
if err != nil {
log.Fatalf("failed to upload part %d: %v", partNumber, err)
}
part := oss.UploadPart{
PartNumber: partRequest.PartNumber,
ETag: partResult.ETag,
}
mu.Lock()
parts = append(parts, part)
mu.Unlock()
}(i+1, start, end)
}
wg.Wait()
request := &oss.CompleteMultipartUploadRequest{
Bucket: oss.Ptr(bucketName),
Key: oss.Ptr(objectName),
UploadId: oss.Ptr(uploadId),
CompleteMultipartUpload: &oss.CompleteMultipartUpload{
Parts: parts,
},
}
result, err := client.CompleteMultipartUpload(context.TODO(), request)
if err != nil {
log.Fatalf("failed to complete multipart upload %v", err)
}
log.Printf("complete multipart upload result:%#v\n", result)
}
For information about sample code in other scenarios, see Multipart upload.
The following sample code provides an example on how to implement multipart upload by using OSS SDK for Node.js:
const OSS = require('ali-oss');
const path = require("path");
const client = new OSS({
region: 'yourregion',
accessKeyId: process.env.OSS_ACCESS_KEY_ID,
accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
authorizationV4: true,
bucket: 'yourbucketname',
});
const progress = (p, _checkpoint) => {
console.log(p);
console.log(_checkpoint);
};
const headers = {
'x-oss-storage-class': 'Standard',
'x-oss-tagging': 'Tag1=1&Tag2=2',
'x-oss-forbid-overwrite': 'true'
}
async function multipartUpload() {
try {
const result = await client.multipartUpload('exampledir/exampleobject.txt', path.normalize('D:\\localpath\\examplefile.txt'), {
progress,
meta: {
year: 2020,
people: 'test',
},
});
console.log(result);
const head = await client.head('exampledir/exampleobject.txt');
console.log(head);
} catch (e) {
if (e.code === 'ConnectionTimeoutError') {
console.log('TimeoutError');
}
console.log(e);
}
}
multipartUpload();
For information about sample code in other scenarios, see Multipart upload.
The following sample code provides an example on how to implement multipart upload by using OSS SDK for PHP:
<?php
if (is_file(__DIR__ . '/../autoload.php')) {
require_once __DIR__ . '/../autoload.php';
}
if (is_file(__DIR__ . '/../vendor/autoload.php')) {
require_once __DIR__ . '/../vendor/autoload.php';
}
use OSS\Credentials\EnvironmentVariableCredentialsProvider;
use OSS\OssClient;
use OSS\CoreOssException;
use OSS\Core\OssUtil;
$provider = new EnvironmentVariableCredentialsProvider();
$endpoint = 'https://oss-cn-hangzhou.aliyuncs.com';
$bucket= 'examplebucket';
$object = 'exampledir/exampleobject.txt';
$uploadFile = 'D:\\localpath\\examplefile.txt';
$initOptions = array(
OssClient::OSS_HEADERS => array(
),
);
try{
$config = array(
"provider" => $provider,
"endpoint" => $endpoint,
"signatureVersion" => OssClient::OSS_SIGNATURE_VERSION_V4,
"region"=> "cn-hangzhou"
);
$ossClient = new OssClient($config);
$uploadId = $ossClient->initiateMultipartUpload($bucket, $object, $initOptions);
print("initiateMultipartUpload OK" . "\n");
} catch(OssException $e) {
printf($e->getMessage() . "\n");
return;
}
$partSize = 10 * 1024 * 1024;
$uploadFileSize = sprintf('%u',filesize($uploadFile));
$pieces = $ossClient->generateMultiuploadParts($uploadFileSize, $partSize);
$responseUploadPart = array();
$uploadPosition = 0;
$isCheckMd5 = true;
foreach ($pieces as $i => $piece) {
$fromPos = $uploadPosition + (integer)$piece[$ossClient::OSS_SEEK_TO];
$toPos = (integer)$piece[$ossClient::OSS_LENGTH] + $fromPos - 1;
$upOptions = array(
$ossClient::OSS_FILE_UPLOAD => $uploadFile,
$ossClient::OSS_PART_NUM => ($i + 1),
$ossClient::OSS_SEEK_TO => $fromPos,
$ossClient::OSS_LENGTH => $toPos - $fromPos + 1,
$ossClient::OSS_CHECK_MD5 => $isCheckMd5,
);
if ($isCheckMd5) {
$contentMd5 = OssUtil::getMd5SumForFile($uploadFile, $fromPos, $toPos);
$upOptions[$ossClient::OSS_CONTENT_MD5] = $contentMd5;
}
try {
$responseUploadPart[] = $ossClient->uploadPart($bucket, $object, $uploadId, $upOptions);
printf("initiateMultipartUpload, uploadPart - part#{$i} OK\n");
} catch(OssException $e) {
printf("initiateMultipartUpload, uploadPart - part#{$i} FAILED\n");
printf($e->getMessage() . "\n");
return;
}
}
$uploadParts = array();
foreach ($responseUploadPart as $i => $eTag) {
$uploadParts[] = array(
'PartNumber' => ($i + 1),
'ETag' => $eTag,
);
}
$comOptions['headers'] = array(
);
try {
$ossClient->completeMultipartUpload($bucket, $object, $uploadId, $uploadParts,$comOptions);
printf( "Complete Multipart Upload OK\n");
} catch(OssException $e) {
printf("Complete Multipart Upload FAILED\n");
printf($e->getMessage() . "\n");
return;
}
For information about sample code in other scenarios, see Multipart upload.
The following sample code provides an example on how to implement multipart upload by using OSS SDK for C#:
using Aliyun.OSS;
using Aliyun.OSS.Common;
var endpoint = "yourEndpoint";
var accessKeyId = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_ID");
var accessKeySecret = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_SECRET");
var bucketName = "examplebucket";
var objectName = "exampleobject.txt";
var localFilename = "D:\\localpath\\examplefile.txt";
const string region = "cn-hangzhou";
var conf = new ClientConfiguration();
conf.SignatureVersion = SignatureVersion.V4;
var client = new OssClient(endpoint, accessKeyId, accessKeySecret, conf);
client.SetRegion(region);
var uploadId = "";
try
{
var request = new InitiateMultipartUploadRequest(bucketName, objectName);
var result = client.InitiateMultipartUpload(request);
uploadId = result.UploadId;
Console.WriteLine("Init multi part upload succeeded");
Console.WriteLine("Upload Id:{0}", result.UploadId);
}
catch (Exception ex)
{
Console.WriteLine("Init multi part upload failed, {0}", ex.Message);
Environment.Exit(1);
}
var partSize = 100 * 1024;
var fi = new FileInfo(localFilename);
var fileSize = fi.Length;
var partCount = fileSize / partSize;
if (fileSize % partSize != 0)
{
partCount++;
}
var partETags = new List<PartETag>();
try
{
using (var fs = File.Open(localFilename, FileMode.Open))
{
for (var i = 0; i < partCount; i++)
{
var skipBytes = (long)partSize * i;
fs.Seek(skipBytes, 0);
var size = (partSize < fileSize - skipBytes) ? partSize : (fileSize - skipBytes);
var request = new UploadPartRequest(bucketName, objectName, uploadId)
{
InputStream = fs,
PartSize = size,
PartNumber = i + 1
};
var result = client.UploadPart(request);
partETags.Add(result.PartETag);
Console.WriteLine("finish {0}/{1}", partETags.Count, partCount);
}
Console.WriteLine("Put multi part upload succeeded");
}
}
catch (Exception ex)
{
Console.WriteLine("Put multi part upload failed, {0}", ex.Message);
Environment.Exit(1);
}
try
{
var completeMultipartUploadRequest = new CompleteMultipartUploadRequest(bucketName, objectName, uploadId);
foreach (var partETag in partETags)
{
completeMultipartUploadRequest.PartETags.Add(partETag);
}
var result = client.CompleteMultipartUpload(completeMultipartUploadRequest);
Console.WriteLine("complete multi part succeeded");
}
catch (Exception ex)
{
Console.WriteLine("complete multi part failed, {0}", ex.Message);
Environment.Exit(1);
}
For information about sample code in other scenarios, see Multipart upload.
The following sample code provides an example on how to implement multipart upload by using OSS SDK for Browser.js:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>Document</title>
</head>
<body>
<button id="submit">Upload</button>
<input id="file" type="file" />
<script
type="text/javascript"
src="https://gosspublic.alicdn.com/aliyun-oss-sdk-6.18.0.min.js"
></script>
<script type="text/javascript">
const client = new OSS({
region: "yourRegion",
authorizationV4: true,
accessKeyId: "yourAccessKeyId",
accessKeySecret: "yourAccessKeySecret",
stsToken: "yourSecurityToken",
bucket: "examplebucket",
});
const headers = {
"Cache-Control": "no-cache",
"Content-Disposition": "example.txt",
Expires: "1000",
"x-oss-storage-class": "Standard",
"x-oss-tagging": "Tag1=1&Tag2=2",
"x-oss-forbid-overwrite": "true",
};
const name = "exampleobject.txt";
const submit = document.getElementById("submit");
const options = {
progress: (p, cpt, res) => {
console.log(p);
},
parallel: 4,
partSize: 1024 * 1024,
meta: { year: 2020, people: "test" },
mime: "text/plain",
};
submit.addEventListener("click", async () => {
try {
const data = document.getElementById("file").files[0];
const res = await client.multipartUpload(name, data, {
...options,
callback: {
url: "http://examplebucket.aliyuncs.com:23450",
host: "yourHost",
body: "bucket=${bucket}&object=${object}&var1=${x:var1}",
contentType: "application/x-www-form-urlencoded",
customValue: {
var1: "value1",
var2: "value2",
},
},
});
console.log(res);
} catch (err) {
console.log(err);
}
});
</script>
</body>
</html>
For information about sample code in other scenarios, see Multipart upload.
The following sample code provides an example on how to implement multipart upload by using OSS SDK for Android:
String bucketName = "examplebucket";
String objectName = "exampledir/exampleobject.txt";
String localFilepath = "/storage/emulated/0/oss/examplefile.txt";
InitiateMultipartUploadRequest init = new InitiateMultipartUploadRequest(bucketName, objectName);
InitiateMultipartUploadResult initResult = oss.initMultipartUpload(init);
String uploadId = initResult.getUploadId();
int partSize = 100 * 1024;
List<PartETag> partETags = new ArrayList<>();
for (int i = 1; i < 5; i++) {
byte[] data = new byte[partSize];
RandomAccessFile raf = new RandomAccessFile(localFilepath, "r");
long skip = (i-1) * partSize;
raf.seek(skip);
raf.readFully(data, 0, partSize);
UploadPartRequest uploadPart = new UploadPartRequest();
uploadPart.setBucketName(bucketName);
uploadPart.setObjectKey(objectName);
uploadPart.setUploadId(uploadId);
uploadPart.setPartNumber(i);
uploadPart.setPartContent(data);
try {
UploadPartResult result = oss.uploadPart(uploadPart);
PartETag partETag = new PartETag(uploadPart.getPartNumber(), result.getETag());
partETags.add(partETag);
} catch (ServiceException serviceException) {
OSSLog.logError(serviceException.getErrorCode());
}
}
Collections.sort(partETags, new Comparator<PartETag>() {
@Override
public int compare(PartETag lhs, PartETag rhs) {
if (lhs.getPartNumber() < rhs.getPartNumber()) {
return -1;
} else if (lhs.getPartNumber() > rhs.getPartNumber()) {
return 1;
} else {
return 0;
}
}
});
CompleteMultipartUploadRequest complete = new CompleteMultipartUploadRequest(bucketName, objectName, uploadId, partETags);
complete.setCallbackParam(new HashMap<String, String>() {
{
put("callbackUrl", CALLBACK_SERVER);
put("callbackBody", "test");
}
});
CompleteMultipartUploadResult completeResult = oss.completeMultipartUpload(complete);
OSSLog.logError("-------------- serverCallback: " + completeResult.getServerCallbackReturnBody());
For information about sample code in other scenarios, see Multipart upload.
The following sample code provides an example on how to implement multipart upload by using OSS SDK for C++:
#include <alibabacloud/oss/OssClient.h>
#include <fstream>
int64_t getFileSize(const std::string& file)
{
std::fstream f(file, std::ios::in | std::ios::binary);
f.seekg(0, f.end);
int64_t size = f.tellg();
f.close();
return size;
}
using namespace AlibabaCloud::OSS;
int main(void)
{
std::string Endpoint = "yourEndpoint";
std::string Region = "yourRegion";
std::string BucketName = "examplebucket";
std::string ObjectName = "exampledir/exampleobject.txt";
InitializeSdk();
ClientConfiguration conf;
conf.signatureVersion = SignatureVersionType::V4;
auto credentialsProvider = std::make_shared<EnvironmentVariableCredentialsProvider>();
OssClient client(Endpoint, credentialsProvider, conf);
client.SetRegion(Region);
InitiateMultipartUploadRequest initUploadRequest(BucketName, ObjectName);
auto uploadIdResult = client.InitiateMultipartUpload(initUploadRequest);
auto uploadId = uploadIdResult.result().UploadId();
std::string fileToUpload = "yourLocalFilename";
int64_t partSize = 100 * 1024;
PartList partETagList;
auto fileSize = getFileSize(fileToUpload);
int partCount = static_cast<int>(fileSize / partSize);
if (fileSize % partSize != 0) {
partCount++;
}
for (int i = 1; i <= partCount; i++) {
auto skipBytes = partSize * (i - 1);
auto size = (partSize < fileSize - skipBytes) ? partSize : (fileSize - skipBytes);
std::shared_ptr<std::iostream> content = std::make_shared<std::fstream>(fileToUpload, std::ios::in|std::ios::binary);
content->seekg(skipBytes, std::ios::beg);
UploadPartRequest uploadPartRequest(BucketName, ObjectName, content);
uploadPartRequest.setContentLength(size);
uploadPartRequest.setUploadId(uploadId);
uploadPartRequest.setPartNumber(i);
auto uploadPartOutcome = client.UploadPart(uploadPartRequest);
if (uploadPartOutcome.isSuccess()) {
Part part(i, uploadPartOutcome.result().ETag());
partETagList.push_back(part);
}
else {
std::cout << "uploadPart fail" <<
",code:" << uploadPartOutcome.error().Code() <<
",message:" << uploadPartOutcome.error().Message() <<
",requestId:" << uploadPartOutcome.error().RequestId() << std::endl;
}
}
CompleteMultipartUploadRequest request(BucketName, ObjectName);
request.setUploadId(uploadId);
request.setPartList(partETagList);
auto outcome = client.CompleteMultipartUpload(request);
if (!outcome.isSuccess()) {
std::cout << "CompleteMultipartUpload fail" <<
",code:" << outcome.error().Code() <<
",message:" << outcome.error().Message() <<
",requestId:" << outcome.error().RequestId() << std::endl;
return -1;
}
ShutdownSdk();
return 0;
}
For information about sample code in other scenarios, see Multipart upload.
The following sample code provides an example on how to implement multipart upload by using OSS SDK for Objective-C:
__block NSString * uploadId = nil;
__block NSMutableArray * partInfos = [NSMutableArray new];
NSString * uploadToBucket = @"examplebucket";
NSString * uploadObjectkey = @"exampledir/exampleobject.txt";
OSSInitMultipartUploadRequest * init = [OSSInitMultipartUploadRequest new];
init.bucketName = uploadToBucket;
init.objectKey = uploadObjectkey;
OSSTask * initTask = [client multipartUploadInit:init];
[initTask waitUntilFinished];
if (!initTask.error) {
OSSInitMultipartUploadResult * result = initTask.result;
uploadId = result.uploadId;
} else {
NSLog(@"multipart upload failed, error: %@", initTask.error);
return;
}
NSString * filePath = @"<filepath>";
uint64_t fileSize = [[[NSFileManager defaultManager] attributesOfItemAtPath:filePath error:nil] fileSize];
int chunkCount = 10;
uint64_t offset = fileSize/chunkCount;
for (int i = 1; i <= chunkCount; i++) {
OSSUploadPartRequest * uploadPart = [OSSUploadPartRequest new];
uploadPart.bucketName = uploadToBucket;
uploadPart.objectkey = uploadObjectkey;
uploadPart.uploadId = uploadId;
uploadPart.partNumber = i;
NSFileHandle* readHandle = [NSFileHandle fileHandleForReadingAtPath:filePath];
[readHandle seekToFileOffset:offset * (i -1)];
NSData* data = [readHandle readDataOfLength:offset];
uploadPart.uploadPartData = data;
OSSTask * uploadPartTask = [client uploadPart:uploadPart];
[uploadPartTask waitUntilFinished];
if (!uploadPartTask.error) {
OSSUploadPartResult * result = uploadPartTask.result;
[partInfos addObject:[OSSPartInfo partInfoWithPartNum:i eTag:result.eTag size:data.length]];
} else {
NSLog(@"upload part error: %@", uploadPartTask.error);
return;
}
}
OSSCompleteMultipartUploadRequest * complete = [OSSCompleteMultipartUploadRequest new];
complete.bucketName = uploadToBucket;
complete.objectKey = uploadObjectkey;
complete.uploadId = uploadId;
complete.partInfos = partInfos;
OSSTask * completeTask = [client completeMultipartUpload:complete];
[[completeTask continueWithBlock:^id(OSSTask *task) {
if (!task.error) {
OSSCompleteMultipartUploadResult * result = task.result;
} else {
}
return nil;
}] waitUntilFinished];
For information about sample code in other scenarios, see Multipart upload.
The following sample code provides an example on how to implement multipart upload by using OSS SDK for C:
#include "oss_api.h"
#include "aos_http_io.h"
#include <sys/stat.h>
const char *endpoint = "yourEndpoint";
const char *bucket_name = "examplebucket";
const char *object_name = "exampledir/exampleobject.txt";
const char *local_filename = "yourLocalFilename";
const char *region = "yourRegion";
void init_options(oss_request_options_t *options)
{
options->config = oss_config_create(options->pool);
aos_str_set(&options->config->endpoint, endpoint);
aos_str_set(&options->config->access_key_id, getenv("OSS_ACCESS_KEY_ID"));
aos_str_set(&options->config->access_key_secret, getenv("OSS_ACCESS_KEY_SECRET"));
aos_str_set(&options->config->region, region);
options->config->signature_version = 4;
options->config->is_cname = 0;
options->ctl = aos_http_controller_create(options->pool, 0);
}
int64_t get_file_size(const char *file_path)
{
int64_t filesize = -1;
struct stat statbuff;
if(stat(file_path, &statbuff) < 0){
return filesize;
} else {
filesize = statbuff.st_size;
}
return filesize;
}
int main(int argc, char *argv[])
{
if (aos_http_io_initialize(NULL, 0) != AOSE_OK) {
exit(1);
}
aos_pool_t *pool;
aos_pool_create(&pool, NULL);
oss_request_options_t *oss_client_options;
oss_client_options = oss_request_options_create(pool);
init_options(oss_client_options);
aos_string_t bucket;
aos_string_t object;
oss_upload_file_t *upload_file = NULL;
aos_string_t upload_id;
aos_table_t *headers = NULL;
aos_table_t *complete_headers = NULL;
aos_table_t *resp_headers = NULL;
aos_status_t *resp_status = NULL;
aos_str_set(&bucket, bucket_name);
aos_str_set(&object, object_name);
aos_str_null(&upload_id);
headers = aos_table_make(pool, 1);
complete_headers = aos_table_make(pool, 1);
int part_num = 1;
resp_status = oss_init_multipart_upload(oss_client_options, &bucket, &object, &upload_id, headers, &resp_headers);
if (aos_status_is_ok(resp_status)) {
printf("Init multipart upload succeeded, upload_id:%.*s\n",
upload_id.len, upload_id.data);
} else {
printf("Init multipart upload failed, upload_id:%.*s\n",
upload_id.len, upload_id.data);
}
int64_t file_length = 0;
int64_t pos = 0;
aos_list_t complete_part_list;
oss_complete_part_content_t* complete_content = NULL;
char* part_num_str = NULL;
char* etag = NULL;
aos_list_init(&complete_part_list);
file_length = get_file_size(local_filename);
while(pos < file_length) {
upload_file = oss_create_upload_file(pool);
aos_str_set(&upload_file->filename, local_filename);
upload_file->file_pos = pos;
pos += 100 * 1024;
upload_file->file_last = pos < file_length ? pos : file_length;
resp_status = oss_upload_part_from_file(oss_client_options, &bucket, &object, &upload_id, part_num++, upload_file, &resp_headers);
complete_content = oss_create_complete_part_content(pool);
part_num_str = apr_psprintf(pool, "%d", part_num-1);
aos_str_set(&complete_content->part_number, part_num_str);
etag = apr_pstrdup(pool,
(char*)apr_table_get(resp_headers, "ETag"));
aos_str_set(&complete_content->etag, etag);
aos_list_add_tail(&complete_content->node, &complete_part_list);
if (aos_status_is_ok(resp_status)) {
printf("Multipart upload part from file succeeded\n");
} else {
printf("Multipart upload part from file failed\n");
}
}
resp_status = oss_complete_multipart_upload(oss_client_options, &bucket, &object, &upload_id,
&complete_part_list, complete_headers, &resp_headers);
if (aos_status_is_ok(resp_status)) {
printf("Complete multipart upload from file succeeded, upload_id:%.*s\n",
upload_id.len, upload_id.data);
} else {
printf("Complete multipart upload from file failed\n");
}
aos_pool_destroy(pool);
aos_http_io_deinitialize();
return 0;
}
For information about sample code in other scenarios, see Multipart upload.
API operations
The methods described above are implemented based on the RESTful API, which you can call directly if your business requires a high degree of customization. To call an API directly, you must include the signature calculation in your code.
For information about the API operation that you can call to initiate a multipart upload task, see InitiateMultipartUpload.
For information about the API operation that you can call to upload parts, see UploadPart.
For information about the API operation that you can call to upload a part by copying data from an existing object, see UploadPartCopy.
For information about the API operation that you can call to complete a multipart upload task, see CompleteMultipartUpload.
For information about the API operation that you can call to cancel a multipart upload task and delete the parts generated in the task, see AbortMultipartUpload.
For information about the API operation that you can call to list all ongoing multipart upload tasks, see ListMultipartUploads.
For information about the API operation that you can call to list all uploaded parts, see ListParts.
Permissions
By default, an Alibaba Cloud account has full permissions on the resources in the account. In contrast, RAM users and RAM roles associated with an Alibaba Cloud account initially have no permissions. To manage resources by using a RAM user or a RAM role, you must grant the required permissions by using RAM policies or bucket policies.
| API | Action | Description |
| --- | --- | --- |
| InitiateMultipartUpload | oss:PutObject | Initiates a multipart upload task. |
| InitiateMultipartUpload | oss:PutObjectTagging | Specifies the tags of the object by using the x-oss-tagging header when you initiate a multipart upload task. |
| InitiateMultipartUpload | kms:GenerateDataKey | Required if the metadata of the object includes X-Oss-Server-Side-Encryption: KMS. |
| InitiateMultipartUpload | kms:Decrypt | Required if the metadata of the object includes X-Oss-Server-Side-Encryption: KMS. |
| UploadPart | oss:PutObject | Uploads parts. |
| UploadPartCopy | oss:GetObject | Reads data from the source object when you upload a part by copying data from an existing object. |
| UploadPartCopy | oss:GetObjectVersion | Reads data of a specified version of the source object when you upload a part by copying data from an existing object. |
| UploadPartCopy | oss:PutObject | Writes data to the destination object when you upload a part by copying data from an existing object. |
| CompleteMultipartUpload | oss:PutObject | Combines parts into an object. |
| CompleteMultipartUpload | oss:PutObjectTagging | Specifies the tags of the object by using the x-oss-tagging header when you combine parts into an object. |
| AbortMultipartUpload | oss:AbortMultipartUpload | Cancels a multipart upload task and deletes the uploaded parts. |
| ListMultipartUploads | oss:ListMultipartUploads | Lists all ongoing multipart upload tasks, including tasks that are initiated but not completed or canceled. |
| ListParts | oss:ListParts | Lists all parts that are uploaded by using a specified upload ID. |
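The required actions can be granted to a RAM user with a policy along the following lines. This is an illustrative sketch: the bucket name is a placeholder, and you should scope the action list and resources to what your workload actually needs.

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "oss:PutObject",
        "oss:AbortMultipartUpload",
        "oss:ListMultipartUploads",
        "oss:ListParts"
      ],
      "Resource": [
        "acs:oss:*:*:examplebucket",
        "acs:oss:*:*:examplebucket/*"
      ]
    }
  ]
}
```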
Billing
When you upload a local file to OSS by using multipart upload, fees are generated. For information about the billable items and pricing details, see OSS Fees.
| API | Billable item | Description |
| --- | --- | --- |
| InitiateMultipartUpload | PUT requests | You are charged request fees based on the number of successful requests. |
| UploadPart | PUT requests | You are charged request fees based on the number of successful requests. |
| UploadPart | Storage fees | You are charged storage fees based on the storage class, size, and storage duration of the parts. The storage class of the parts is the same as that of the object. Parts do not have a minimum billable size: a part smaller than 64 KB is billed based on its actual size. After you upload parts to OSS, you are charged storage fees until the parts are deleted or combined into an object, regardless of whether the parts are accessed or whether operations are performed on them. |
| UploadPartCopy | PUT requests | You are charged request fees based on the number of successful requests. |
| CompleteMultipartUpload | PUT requests | You are charged request fees based on the number of successful requests. |
| CompleteMultipartUpload | Storage fees | You are charged storage fees based on the storage class, size, and storage duration of the object. After parts are combined into a complete object that is stored in OSS, you are charged storage fees regardless of whether the object is accessed or whether operations are performed on it. |
| AbortMultipartUpload | PUT requests | You are charged request fees based on the number of successful requests. Important: In regions in the Chinese mainland, when an IA, Archive, or Cold Archive part is deleted based on lifecycle rules, you are charged higher PUT request fees than when a Standard part is deleted. You are not charged PUT request fees when Deep Cold Archive parts are deleted based on lifecycle rules. In the China (Hong Kong) region and regions outside the Chinese mainland, you are not charged PUT request fees when parts of any storage class are deleted based on lifecycle rules. |
| ListMultipartUploads | PUT requests | You are charged request fees based on the number of successful requests. |
| ListParts | PUT requests | You are charged request fees based on the number of successful requests. |
Limits
| Limit | Description |
| --- | --- |
| Object size | Up to 48.8 TB. |
| Number of parts | 1 to 10,000. |
| Part size | 100 KB to 5 GB. The last part can be smaller than 100 KB. |
| Maximum number of parts returned by a single ListParts request | 1,000 |
| Maximum number of multipart upload tasks returned by a single ListMultipartUploads request | 1,000 |
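The part-count and part-size limits above constrain how an object can be split. The following is a minimal sketch of a planner that picks a part size within those limits and returns (part_number, offset, length) tuples; the 100 MiB default part size is an illustrative choice, not an OSS requirement:

```python
import math

MIN_PART_SIZE = 100 * 1024      # 100 KB; only the last part may be smaller
MAX_PART_SIZE = 5 * 1024**3     # 5 GB
MAX_PART_COUNT = 10_000

def plan_parts(object_size, preferred_part_size=100 * 1024**2):
    """Choose a part size that keeps the part count within 10,000 and
    return a list of (part_number, offset, length) tuples."""
    part_size = max(preferred_part_size,
                    math.ceil(object_size / MAX_PART_COUNT),
                    MIN_PART_SIZE)
    if part_size > MAX_PART_SIZE:
        raise ValueError("object exceeds the multipart upload limits")
    parts = []
    offset = 0
    part_number = 1  # part numbers start at 1
    while offset < object_size:
        length = min(part_size, object_size - offset)
        parts.append((part_number, offset, length))
        offset += length
        part_number += 1
    return parts

# A 1 GiB object split into 100 MiB parts: 10 full parts plus a smaller tail.
parts = plan_parts(1 * 1024**3)
```

Each tuple tells you which byte range of the local file to read for a given part number, so parts can be uploaded in parallel and retried independently.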
FAQ
Can I upload a directory by using multipart upload?
No. You cannot upload a directory by using multipart upload.
How do I optimize the upload performance when I upload a large number of objects?
If you upload a large number of objects whose names contain sequential prefixes, such as timestamps and letters, multiple object indexes may be stored in a single partition. If a large number of requests are sent to query the objects, the latency increases. We recommend that you do not upload a large number of objects whose names contain sequential prefixes. For more information, see OSS performance best practices.
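As a sketch of the recommendation above, you can spread object names across index partitions by prepending a short hash-derived prefix to an otherwise sequential key. The key layout below is illustrative:

```python
import hashlib

def randomized_key(sequential_key, prefix_len=4):
    """Prepend a short hash-derived prefix so that object names spread
    across index partitions instead of clustering on a timestamp prefix."""
    digest = hashlib.md5(sequential_key.encode()).hexdigest()
    return f"{digest[:prefix_len]}/{sequential_key}"

# A timestamp-prefixed key gains a pseudo-random leading path segment.
key = randomized_key("2024-05-01/120000_camera01.mp4")
```

The mapping is deterministic, so the randomized key can always be recomputed from the original name when reading the object back.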
How do I prevent high PUT request fees for objects whose storage class is Deep Cold Archive?
If you want to upload a large number of objects and set the storage classes of the objects to Deep Cold Archive, you are charged high PUT request fees. We recommend that you set the storage classes of the objects to Standard when you upload the objects, and configure lifecycle rules to convert the storage classes of the Standard objects to Deep Cold Archive. This reduces PUT request fees.
How do I prevent objects from being accidentally overwritten?
If you upload an object whose name is the same as an existing object in OSS, the existing object is overwritten by the uploaded object. You can use one of the following methods to prevent objects from being unexpectedly overwritten:
Enable versioning: After you enable versioning for a bucket, overwritten objects are saved as previous versions. You can restore an object to a previous version at any time. For more information, see Versioning.
Include the x-oss-forbid-overwrite header in the request: Include the x-oss-forbid-overwrite header in the upload request and set it to true. This way, if you upload an object that has the same name as an existing object, the upload fails and the FileAlreadyExists error is returned. For more information, see InitiateMultipartUpload.
How do I delete parts?
If a multipart upload task is interrupted and the AbortMultipartUpload operation is not called, the parts that are uploaded by the task are stored in the specified bucket. If you no longer require the parts, you can use one of the following methods to delete them and prevent unnecessary storage costs:
Call the AbortMultipartUpload operation to cancel the multipart upload task and delete the parts that were uploaded by using its upload ID.
Configure lifecycle rules for the bucket to automatically delete parts of multipart upload tasks that are not completed within a specified period of time.
How do I list parts?
If you want to list parts that are uploaded by using a specific upload ID, you can call the ListParts operation. For more information, see ListParts.
If you want to list multipart upload tasks that have been initiated but are incomplete or canceled, you can call the ListMultipartUploads operation. For more information, see ListMultipartUploads.
Can I use multipart upload to upload a local file that is encrypted and compressed to OSS?
Yes. You can use multipart upload to upload a local file that is encrypted and compressed to OSS.
Are uploaded parts overwritten when I re-upload the parts after a multipart upload task is interrupted?
After a multipart upload task is interrupted, if you use the original upload ID to re-upload parts, previously uploaded parts that have the same part numbers are overwritten. If you use a new upload ID to re-upload the parts, the parts that were uploaded with the original upload ID are retained.
What is an upload ID in multipart upload?
An upload ID uniquely identifies a multipart upload task. Part numbers identify the relative positions of parts that share the same upload ID.
How long is the upload ID valid during a multipart upload task?
The upload ID remains valid during the multipart upload process. The upload ID becomes invalid after the multipart upload task is complete or canceled. If you want to perform another multipart upload task, you must reinitiate a multipart upload task to generate a new upload ID.
Does OSS automatically combine parts?
No. OSS does not automatically combine parts. You must manually combine parts into a complete object by calling the CompleteMultipartUpload operation.
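As a sketch of what the combine step sends, the CompleteMultipartUpload request body lists each part number together with the ETag that OSS returned for that part, in ascending part-number order. A minimal standard-library builder (the ETag values here are placeholders):

```python
from xml.etree import ElementTree as ET

def build_complete_body(parts):
    """Build the CompleteMultipartUpload request body from
    (part_number, etag) pairs, listed in ascending part-number order."""
    root = ET.Element("CompleteMultipartUpload")
    for part_number, etag in sorted(parts):
        part = ET.SubElement(root, "Part")
        ET.SubElement(part, "PartNumber").text = str(part_number)
        ET.SubElement(part, "ETag").text = etag
    return ET.tostring(root, encoding="unicode")

# Parts may finish uploading out of order; the builder sorts them.
body = build_complete_body([(2, '"ETAG2"'), (1, '"ETAG1"')])
```

The resulting XML is sent as the body of the CompleteMultipartUpload request, after which OSS combines the parts into the final object.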