When you upload an object larger than 5 GB in size to Object Storage Service (OSS), the upload may fail because of network interruptions or program crashes. If the upload still fails after multiple retries, you can use multipart upload to upload the large object. Multipart upload allows you to split the object into multiple parts and upload the parts in parallel based on your network bandwidth and server resources. This reduces the time required to upload the object. After the parts are uploaded, you must call the CompleteMultipartUpload operation to combine the parts into a complete object.
Prerequisites
A bucket is created. For more information, see Create buckets.
The oss:PutObject permission is granted to the RAM user. For more information, see Common examples of RAM policies.
Scenarios
Accelerated upload of large objects
If you want to upload an object larger than 5 GB in size, you can use multipart upload to split the object into multiple parts and upload the parts in parallel to accelerate the upload.
Unstable network connections
If your network connection is unstable, we recommend that you use multipart upload. If a specific part fails to be uploaded, you need to re-upload only that part.
Unknown object size
If you do not know the size of the object that you want to upload, you can use multipart upload. This scenario is common in industrial applications such as video surveillance.
Process
The process consists of the following steps (a minimal code sketch follows the list).
To complete a multipart upload task, you must have the oss:PutObject permission. A multipart upload task consists of three steps: initiating the multipart upload, uploading the parts, and combining the parts. For more information, see Attach a custom policy to a RAM user.
Split the object that you want to upload into parts based on a specific part size.
Call the InitiateMultipartUpload operation to initiate a multipart upload task.
Call the UploadPart operation to upload the parts.
After you split the object into parts, set the partNumber parameter for each part to specify the order of the parts. This way, the parts can be uploaded in parallel. Uploading more parts in parallel does not necessarily accelerate the upload. We recommend that you specify the number of parallel uploads based on your network conditions and device workload. To cancel a multipart upload task, call the AbortMultipartUpload operation. After a multipart upload task is canceled, the parts that were uploaded by using the upload ID of the task are deleted.
Call the CompleteMultipartUpload operation to combine the uploaded parts into a complete object.
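The following minimal Python sketch illustrates the three-step flow by using the OSS SDK for Python (oss2). The endpoint, bucket name, object name, and payload are example values, and access credentials are read from environment variables as in the full samples later in this topic.
import oss2
from oss2.models import PartInfo
from oss2.credentials import EnvironmentVariableCredentialsProvider
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())
bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "examplebucket", region="cn-hangzhou")
key = "exampledir/exampleobject.txt"
payload = b"x" * (250 * 1024)   # 250 KB of example data
part_size = 100 * 1024          # each part except the last must be at least 100 KB
# Step 1: Initiate the multipart upload task and obtain the upload ID.
upload_id = bucket.init_multipart_upload(key).upload_id
# Step 2: Upload each part with its part number. Parts can also be uploaded in parallel.
parts = []
for part_number, offset in enumerate(range(0, len(payload), part_size), start=1):
    result = bucket.upload_part(key, upload_id, part_number, payload[offset:offset + part_size])
    parts.append(PartInfo(part_number, result.etag))
# Step 3: Combine the uploaded parts into a complete object.
bucket.complete_multipart_upload(key, upload_id, parts)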
Limits
Item | Description |
Object size | Multipart upload supports objects up to 48.8 TB in size. |
Number of parts | You can set the number of parts to a value that ranges from 1 to 10,000. |
Part size | The size of each part except the last part must range from 100 KB to 5 GB. The last part can be smaller than 100 KB. |
Maximum number of parts that can be returned for a single ListParts request | Up to 1,000 parts can be returned for a single ListParts request. |
Maximum number of multipart upload tasks that can be returned for a single ListMultipartUploads request | Up to 1,000 multipart upload tasks can be returned for a single ListMultipartUploads request. |
Usage notes
When you use multipart upload to upload objects, you can upload only one object at a time. You cannot upload a directory.
Reduce PUT request fees
If you upload a large number of objects and set their storage class to Deep Cold Archive, you are charged high PUT request fees. We recommend that you set the storage class of the objects to Standard when you upload them and configure a lifecycle rule that converts the storage class of the Standard objects to Deep Cold Archive. This reduces PUT request fees.
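The following sketch shows one way to configure such a lifecycle rule by using the OSS SDK for Python (oss2). The rule ID, prefix, and 30-day threshold are assumptions chosen for illustration; adjust them to your own naming scheme and conversion schedule.
import oss2
from oss2.models import LifecycleRule, BucketLifecycle, StorageTransition
from oss2.credentials import EnvironmentVariableCredentialsProvider
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())
bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "examplebucket", region="cn-hangzhou")
# Convert objects whose names start with "exampledir/" to Deep Cold Archive 30 days after their last modification.
rule = LifecycleRule(
    "convert-to-deep-cold-archive",   # rule ID (example value)
    "exampledir/",                    # prefix to which the rule applies (example value)
    status=LifecycleRule.ENABLED,
    storage_transitions=[StorageTransition(days=30, storage_class="DeepColdArchive")],
)
bucket.put_bucket_lifecycle(BucketLifecycle([rule]))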
Optimize object upload performance
If you upload a large number of objects whose names contain sequential prefixes, such as timestamps or letters, multiple object indexes may be stored in a single partition. In this case, latency increases when a large number of requests are sent to query these objects. We recommend that you do not upload a large number of objects whose names contain sequential prefixes. For more information, see OSS performance best practices.
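As an illustration of this recommendation (not part of the original samples), the following sketch prepends a short random hex string to object names so that the names no longer share a sequential prefix. The naming scheme is an assumption; any scheme that spreads names across prefixes works.
import secrets
def randomized_key(original_key: str) -> str:
    # Prepend a short random hex prefix so that object names do not share a sequential prefix.
    return f"{secrets.token_hex(2)}/{original_key}"
# Example: "2023-01-01/log-000001.txt" may become "9f3a/2023-01-01/log-000001.txt".
print(randomized_key("2023-01-01/log-000001.txt"))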
Object overwriting
If you upload an object that has the same name as an existing object in OSS, the existing object is overwritten by the uploaded object. You can use one of the following methods to prevent objects from being unexpectedly overwritten:
Enable versioning
If versioning is enabled for a bucket, overwritten objects are saved as previous versions. You can restore the previous versions. For more information, see Overview.
Include x-oss-forbid-overwrite in the upload request
Include the x-oss-forbid-overwrite header in the upload request and set the header to true. This way, if you upload an object that has the same name as an existing object, the upload fails and the FileAlreadyExists error is returned. For more information, see InitiateMultipartUpload.
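A minimal sketch of the second method, based on the OSS SDK for Python (oss2) with example bucket and object names: the header is passed when the multipart upload task is initiated, and the upload fails with FileAlreadyExists if an object with the same name already exists.
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())
bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "examplebucket", region="cn-hangzhou")
# Forbid overwriting an existing object that has the same name during this multipart upload.
headers = {"x-oss-forbid-overwrite": "true"}
upload_id = bucket.init_multipart_upload("exampledir/exampleobject.txt", headers=headers).upload_id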
Methods
Use OSS SDKs
The following sample code provides examples of how to use OSS SDKs for common programming languages to perform multipart upload. For more information about how to use OSS SDKs for other programming languages to perform multipart upload, see Overview.
import com.aliyun.oss.ClientException;
import com.aliyun.oss.OSS;
import com.aliyun.oss.common.auth.*;
import com.aliyun.oss.common.comm.SignVersion;
import com.aliyun.oss.ClientBuilderConfiguration;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.OSSException;
import com.aliyun.oss.internal.Mimetypes;
import com.aliyun.oss.model.*;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;
public class Demo {
public static void main(String[] args) throws Exception {
// In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint.
String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.
EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
// Specify the name of the bucket. Example: examplebucket.
String bucketName = "examplebucket";
// Specify the full path of the object. Example: exampledir/exampleobject.txt. Do not include the bucket name in the full path of the object.
String objectName = "exampledir/exampleobject.txt";
// Specify the full path of the local file that you want to upload.
String filePath = "D:\\localpath\\examplefile.txt";
// Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou.
String region = "cn-hangzhou";
// Create an OSSClient instance.
ClientBuilderConfiguration clientBuilderConfiguration = new ClientBuilderConfiguration();
clientBuilderConfiguration.setSignatureVersion(SignVersion.V4);
OSS ossClient = OSSClientBuilder.create()
.endpoint(endpoint)
.credentialsProvider(credentialsProvider)
.clientConfiguration(clientBuilderConfiguration)
.region(region)
.build();
try {
// Create an InitiateMultipartUploadRequest object.
InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest(bucketName, objectName);
// The following sample code provides an example on how to specify the request headers when you initiate a multipart upload task:
ObjectMetadata metadata = new ObjectMetadata();
// metadata.setHeader(OSSHeaders.OSS_STORAGE_CLASS, StorageClass.Standard.toString());
// Specify the caching behavior of the web page for the object.
// metadata.setCacheControl("no-cache");
// Specify the name of the downloaded object.
// metadata.setContentDisposition("attachment;filename=oss_MultipartUpload.txt");
// Specify the content encoding format of the object.
// metadata.setContentEncoding(OSSConstants.DEFAULT_CHARSET_NAME);
// Specify whether existing objects are overwritten by objects that have the same names when the multipart upload task is initiated. In this example, this parameter is set to true, which indicates that existing objects cannot be overwritten by objects with the same names.
// metadata.setHeader("x-oss-forbid-overwrite", "true");
// Specify the server-side encryption method that you want to use to encrypt each part of the object that you want to upload.
// metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_ENCRYPTION, ObjectMetadata.KMS_SERVER_SIDE_ENCRYPTION);
// Specify the algorithm that you want to use to encrypt the object. If you do not specify this parameter, the object is encrypted by using AES-256.
// metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_DATA_ENCRYPTION, ObjectMetadata.KMS_SERVER_SIDE_ENCRYPTION);
// Specify the ID of the customer master key (CMK) that is managed by Key Management Service (KMS).
// metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_ENCRYPTION_KEY_ID, "9468da86-3509-4f8d-a61e-6eab1eac****");
// Specify the storage class of the object.
// metadata.setHeader(OSSHeaders.OSS_STORAGE_CLASS, StorageClass.Standard);
// Specify one or more tags for the object.
// metadata.setHeader(OSSHeaders.OSS_TAGGING, "a:1");
// request.setObjectMetadata(metadata);
// Specify ContentType based on the object type. If you do not specify this parameter, the default value of ContentType is used, which is application/octet-stream.
if (metadata.getContentType() == null) {
metadata.setContentType(Mimetypes.getInstance().getMimetype(new File(filePath), objectName));
}
// Initiate the multipart upload task.
InitiateMultipartUploadResult upresult = ossClient.initiateMultipartUpload(request);
// Obtain the upload ID.
String uploadId = upresult.getUploadId();
// Cancel the multipart upload task or list uploaded parts based on the upload ID.
// If you want to cancel a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task.
// If you want to list the uploaded parts in a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task but before you call the CompleteMultipartUpload operation to complete the multipart upload task.
// System.out.println(uploadId);
// partETags is a set of PartETags. A PartETag consists of the part number and ETag of an uploaded part.
List<PartETag> partETags = new ArrayList<PartETag>();
// Specify the size of each part, which is used to calculate the number of parts of the object. Unit: bytes.
final long partSize = 1 * 1024 * 1024L; // Set the part size to 1 MB.
// Calculate the number of parts based on the size of the uploaded data. In the following sample code, a local file is used as an example to describe how to use the File.length() method to obtain the size of the uploaded data.
final File sampleFile = new File(filePath);
long fileLength = sampleFile.length();
int partCount = (int) (fileLength / partSize);
if (fileLength % partSize != 0) {
partCount++;
}
// Upload all parts.
for (int i = 0; i < partCount; i++) {
long startPos = i * partSize;
long curPartSize = (i + 1 == partCount) ? (fileLength - startPos) : partSize;
UploadPartRequest uploadPartRequest = new UploadPartRequest();
uploadPartRequest.setBucketName(bucketName);
uploadPartRequest.setKey(objectName);
uploadPartRequest.setUploadId(uploadId);
// Specify the input stream of the multipart upload task.
// In the following sample code, a local file is used as an example to describe how to create a FileInputStream and use the InputStream.skip() method to skip to the specified position in the data.
InputStream instream = new FileInputStream(sampleFile);
instream.skip(startPos);
uploadPartRequest.setInputStream(instream);
// Specify the part size. The size of each part except for the last part must be greater than or equal to 100 KB.
uploadPartRequest.setPartSize(curPartSize);
// Specify part numbers. Each part has a part number that ranges from 1 to 10,000. If the part number that you specify does not fall within the specified range, OSS returns the InvalidArgument error code.
uploadPartRequest.setPartNumber( i + 1);
// Parts are not necessarily uploaded in order and can be uploaded from different OSS clients. OSS sorts the parts based on the part numbers and combines the parts into a complete object.
UploadPartResult uploadPartResult = ossClient.uploadPart(uploadPartRequest);
// Each time a part is uploaded, OSS returns a result that contains the PartETag of the part. The PartETag is stored in partETags.
partETags.add(uploadPartResult.getPartETag());
}
// Create a CompleteMultipartUploadRequest object.
// When you call the CompleteMultipartUpload operation, you must provide all valid PartETags. After OSS receives the PartETags, OSS verifies all parts one by one. After all parts are verified, OSS combines these parts into a complete object.
CompleteMultipartUploadRequest completeMultipartUploadRequest =
new CompleteMultipartUploadRequest(bucketName, objectName, uploadId, partETags);
// The following sample code provides an example on how to configure the access control list (ACL) of the object when you initiate a multipart upload task:
// completeMultipartUploadRequest.setObjectACL(CannedAccessControlList.Private);
// Specify whether to list all parts that are uploaded by using the current upload ID. For OSS SDK for Java 3.14.0 and later, you can set PartETags in CompleteMultipartUploadRequest to null only when you list all parts uploaded to the OSS server to combine the parts into a complete object.
// Map<String, String> headers = new HashMap<String, String>();
// If you set x-oss-complete-all to yes in the request, OSS lists all parts that are uploaded by using the current upload ID, sorts the parts by part number, and then performs the CompleteMultipartUpload operation.
// If you set x-oss-complete-all to yes in the request, the request body cannot be specified. If you specify the request body, an error is reported.
// headers.put("x-oss-complete-all","yes");
// completeMultipartUploadRequest.setHeaders(headers);
// Complete the multipart upload task.
CompleteMultipartUploadResult completeMultipartUploadResult = ossClient.completeMultipartUpload(completeMultipartUploadRequest);
System.out.println(completeMultipartUploadResult.getETag());
} catch (OSSException oe) {
System.out.println("Caught an OSSException, which means your request made it to OSS, "
+ "but was rejected with an error response for some reason.");
System.out.println("Error Message:" + oe.getErrorMessage());
System.out.println("Error Code:" + oe.getErrorCode());
System.out.println("Request ID:" + oe.getRequestId());
System.out.println("Host ID:" + oe.getHostId());
} catch (ClientException ce) {
System.out.println("Caught an ClientException, which means the client encountered "
+ "a serious internal problem while trying to communicate with OSS, "
+ "such as not being able to access the network.");
System.out.println("Error Message:" + ce.getMessage());
} finally {
if (ossClient != null) {
ossClient.shutdown();
}
}
}
}
<?php
if (is_file(__DIR__ . '/../autoload.php')) {
require_once __DIR__ . '/../autoload.php';
}
if (is_file(__DIR__ . '/../vendor/autoload.php')) {
require_once __DIR__ . '/../vendor/autoload.php';
}
use OSS\Credentials\EnvironmentVariableCredentialsProvider;
use OSS\OssClient;
use OSS\CoreOssException;
use OSS\Core\OssUtil;
// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.
$provider = new EnvironmentVariableCredentialsProvider();
// In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint.
$endpoint = 'https://oss-cn-hangzhou.aliyuncs.com';
// Specify the name of the bucket. Example: examplebucket.
$bucket= 'examplebucket';
// Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt.
$object = 'exampledir/exampleobject.txt';
// Specify the full path of the local file that you want to upload.
$uploadFile = 'D:\\localpath\\examplefile.txt';
$initOptions = array(
OssClient::OSS_HEADERS => array(
// Specify the caching behavior of the web page when the object is downloaded.
// 'Cache-Control' => 'no-cache',
//Specify the name of the object when the object is downloaded.
// 'Content-Disposition' => 'attachment;filename=oss_download.jpg',
// Specify the content encoding format of the object when the object is downloaded.
// 'Content-Encoding' => 'utf-8',
// Specify the validity period of the request. Unit: milliseconds.
// 'Expires' => 150,
// Specify whether the object that is uploaded by using multipart upload overwrites the existing object that has the same name when the multipart upload task is initialized. In this example, this parameter is set to true, which specifies that the uploaded object that has the same name as the existing object does not overwrite the existing object.
//'x-oss-forbid-overwrite' => 'true',
// Specify the server-side encryption method that you want to use to encrypt each part of the object.
// 'x-oss-server-side-encryption'=> 'KMS',
// Specify the algorithm that you want to use to encrypt the object.
// 'x-oss-server-side-data-encryption'=>'SM4',
// Specify the ID of the customer master key (CMK) that is managed by Key Management Service (KMS).
//'x-oss-server-side-encryption-key-id' => '9468da86-3509-4f8d-a61e-6eab1eac****',
// Specify the storage class of the object.
// 'x-oss-storage-class' => 'Standard',
// Specify tags for the object. You can specify multiple tags for the object at a time.
// 'x-oss-tagging' => 'TagA=A&TagB=B',
),
);
/**
* Step 1: Initiate a multipart upload task and obtain the upload ID.
*/
try{
$config = array(
"provider" => $provider,
"endpoint" => $endpoint,
"signatureVersion" => OssClient::OSS_SIGNATURE_VERSION_V4,
"region"=> "cn-hangzhou"
);
$ossClient = new OssClient($config);
// Obtain the upload ID. The upload ID is the unique identifier of a multipart upload task. You can perform related operations such as canceling or querying the multipart upload task based on the upload ID.
$uploadId = $ossClient->initiateMultipartUpload($bucket, $object, $initOptions);
print("initiateMultipartUpload OK" . "\n");
// Cancel the multipart upload task or list uploaded parts based on the upload ID.
// If you want to cancel a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task.
// If you want to list the uploaded parts in a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task but before you call the CompleteMultipartUpload operation to complete the multipart upload task.
//print("UploadId: " . $uploadId . "\n");
} catch(OssException $e) {
printf($e->getMessage() . "\n");
return;
}
/*
* Step 2: Upload parts.
*/
$partSize = 10 * 1024 * 1024;
$uploadFileSize = sprintf('%u',filesize($uploadFile));
$pieces = $ossClient->generateMultiuploadParts($uploadFileSize, $partSize);
$responseUploadPart = array();
$uploadPosition = 0;
$isCheckMd5 = true;
foreach ($pieces as $i => $piece) {
$fromPos = $uploadPosition + (integer)$piece[$ossClient::OSS_SEEK_TO];
$toPos = (integer)$piece[$ossClient::OSS_LENGTH] + $fromPos - 1;
$upOptions = array(
// Upload the object.
$ossClient::OSS_FILE_UPLOAD => $uploadFile,
// Specify part numbers.
$ossClient::OSS_PART_NUM => ($i + 1),
// Specify the position from which the multipart upload task starts.
$ossClient::OSS_SEEK_TO => $fromPos,
// Specify the object length.
$ossClient::OSS_LENGTH => $toPos - $fromPos + 1,
// Specify whether to enable MD5 verification. The value true specifies that MD5 verification is enabled.
$ossClient::OSS_CHECK_MD5 => $isCheckMd5,
);
// Enable MD5 verification.
if ($isCheckMd5) {
$contentMd5 = OssUtil::getMd5SumForFile($uploadFile, $fromPos, $toPos);
$upOptions[$ossClient::OSS_CONTENT_MD5] = $contentMd5;
}
try {
// Upload the parts.
$responseUploadPart[] = $ossClient->uploadPart($bucket, $object, $uploadId, $upOptions);
printf("initiateMultipartUpload, uploadPart - part#{$i} OK\n");
} catch(OssException $e) {
printf("initiateMultipartUpload, uploadPart - part#{$i} FAILED\n");
printf($e->getMessage() . "\n");
return;
}
}
// $uploadParts is an array that consists of the ETag and part number of each part.
$uploadParts = array();
foreach ($responseUploadPart as $i => $eTag) {
$uploadParts[] = array(
'PartNumber' => ($i + 1),
'ETag' => $eTag,
);
}
/**
* Step 3: Complete the multipart upload task.
*/
$comOptions['headers'] = array(
// Specify whether the object that is uploaded by using multipart upload overwrites the existing object that has the same name when the multipart upload task is complete. In this example, this parameter is set to true, which specifies that the uploaded object that has the same name as the existing object does not overwrite the existing object.
// 'x-oss-forbid-overwrite' => 'true',
// If you set the x-oss-complete-all parameter to yes, OSS lists all parts that are uploaded by using the current upload ID, sorts the parts by part number, and then performs the CompleteMultipartUpload operation.
// 'x-oss-complete-all'=> 'yes'
);
try {
// All valid values of the $uploadParts parameter are required for the CompleteMultipartUpload operation. After OSS receives the values of the $uploadParts parameter, OSS verifies all parts one by one. After all parts are verified, OSS combines the parts into a complete object.
$ossClient->completeMultipartUpload($bucket, $object, $uploadId, $uploadParts,$comOptions);
printf( "Complete Multipart Upload OK\n");
} catch(OssException $e) {
printf("Complete Multipart Upload FAILED\n");
printf($e->getMessage() . "\n");
return;
}
const OSS = require('ali-oss');
const path = require("path");
const client = new OSS({
// Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou.
region: 'yourregion',
// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.
accessKeyId: process.env.OSS_ACCESS_KEY_ID,
accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
authorizationV4: true,
// Specify the name of the bucket.
bucket: 'yourbucketname',
});
const progress = (p, _checkpoint) => {
// Record the upload progress of the object.
console.log(p);
// Record the checkpoint information about the multipart upload task.
console.log(_checkpoint);
};
const headers = {
// Specify the storage class of the object.
'x-oss-storage-class': 'Standard',
// Specify tags for the object. You can specify multiple tags for the object.
'x-oss-tagging': 'Tag1=1&Tag2=2',
// Specify whether to overwrite an existing object with the same name when the multipart upload task is initialized. In this example, this parameter is set to true, which indicates that an existing object with the same name as the object to upload is not overwritten.
'x-oss-forbid-overwrite': 'true'
}
// Start the multipart upload task.
async function multipartUpload() {
try {
// Specify the full path of the object. Example: exampledir/exampleobject.txt. Then, specify the full path of the local file. Example: D:\\localpath\\examplefile.txt. Do not include the bucket name in the full path.
// By default, if you set this parameter to the name of a local file such as examplefile.txt without specifying the local path, the local file is uploaded from the local path of the project to which the sample program belongs.
const result = await client.multipartUpload('exampledir/exampleobject.txt', path.normalize('D:\\localpath\\examplefile.txt'), {
progress,
// headers,
// Configure the meta parameter to specify metadata for the object. You can call the HeadObject operation to obtain the object metadata.
meta: {
year: 2020,
people: 'test',
},
});
console.log(result);
// Specify the full path of the object. Example: exampledir/exampleobject.txt. Do not include the bucket name in the full path.
const head = await client.head('exampledir/exampleobject.txt');
console.log(head);
} catch (e) {
// Handle timeout exceptions.
if (e.code === 'ConnectionTimeoutError') {
console.log('TimeoutError');
// do ConnectionTimeoutError operation
}
console.log(e);
}
}
multipartUpload();
# -*- coding: utf-8 -*-
import os
from oss2 import SizedFileAdapter, determine_part_size
from oss2.models import PartInfo
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider
# Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())
# Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com.
endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
# Specify the ID of the region that maps to the endpoint. Example: cn-hangzhou. This parameter is required if you use the signature algorithm V4.
region = "cn-hangzhou"
# Specify the name of your bucket.
bucket = oss2.Bucket(auth, endpoint, "yourBucketName", region=region)
# Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt.
key = 'exampledir/exampleobject.txt'
# Specify the full path of the local file that you want to upload. Example: D:\\localpath\\examplefile.txt.
filename = 'D:\\localpath\\examplefile.txt'
total_size = os.path.getsize(filename)
# Use the determine_part_size method to determine the part size.
part_size = determine_part_size(total_size, preferred_size=100 * 1024)
# Initiate a multipart upload task.
# If you want to specify the storage class of the object when you initiate the multipart upload task, configure the related headers when you use the init_multipart_upload method.
# headers = dict()
# Specify the caching behavior of the web page for the object.
# headers['Cache-Control'] = 'no-cache'
# Specify the name of the object when it is downloaded.
# headers['Content-Disposition'] = 'oss_MultipartUpload.txt'
# Specify the content encoding format of the object.
# headers['Content-Encoding'] = 'utf-8'
# Specify the validity period. Unit: milliseconds.
# headers['Expires'] = '1000'
# Specify whether the object that is uploaded by performing multipart upload overwrites the existing object that has the same name when the multipart upload task is initiated. In this example, this parameter is set to true, which indicates that the object with the same name cannot be overwritten.
# headers['x-oss-forbid-overwrite'] = 'true'
# Specify the server-side encryption method that you want to use to encrypt each part.
# headers[OSS_SERVER_SIDE_ENCRYPTION] = SERVER_SIDE_ENCRYPTION_KMS
# Specify the algorithm that you want to use to encrypt the object. If you do not configure this parameter, the object is encrypted by using AES-256.
# headers[OSS_SERVER_SIDE_DATA_ENCRYPTION] = SERVER_SIDE_ENCRYPTION_KMS
# Specify the ID of the Customer Master Key (CMK) that is managed by Key Management Service (KMS).
# headers[OSS_SERVER_SIDE_ENCRYPTION_KEY_ID] = '9468da86-3509-4f8d-a61e-6eab1eac****'
# Specify the storage class of the object.
# headers['x-oss-storage-class'] = oss2.BUCKET_STORAGE_CLASS_STANDARD
# Specify tags for the object. You can specify multiple tags for the object at the same time.
# headers[OSS_OBJECT_TAGGING] = 'k1=v1&k2=v2&k3=v3'
# upload_id = bucket.init_multipart_upload(key, headers=headers).upload_id
upload_id = bucket.init_multipart_upload(key).upload_id
# Cancel the multipart upload task or list uploaded parts based on the upload ID.
# If you want to cancel a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task.
# If you want to list the uploaded parts in a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task but before you call the CompleteMultipartUpload operation to complete the multipart upload task.
# print("UploadID:", upload_id)
parts = []
# Upload the parts.
with open(filename, 'rb') as fileobj:
    part_number = 1
    offset = 0
    while offset < total_size:
        num_to_upload = min(part_size, total_size - offset)
        # Use the SizedFileAdapter(fileobj, size) method to generate a new object and recalculate the position from which the append operation starts.
        result = bucket.upload_part(key, upload_id, part_number,
                                    SizedFileAdapter(fileobj, num_to_upload))
        parts.append(PartInfo(part_number, result.etag))
        offset += num_to_upload
        part_number += 1
# Complete the multipart upload task.
# Configure headers (if you want to) when you complete the multipart upload task.
headers = dict()
# Specify the access control list (ACL) of the object. In this example, the ACL is set to OBJECT_ACL_PRIVATE, which indicates that the ACL of the object is private.
# headers["x-oss-object-acl"] = oss2.OBJECT_ACL_PRIVATE
bucket.complete_multipart_upload(key, upload_id, parts, headers=headers)
# bucket.complete_multipart_upload(key, upload_id, parts)
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>Document</title>
</head>
<body>
<button id="submit">Upload</button>
<input id="file" type="file" />
<!-- Import the SDK file -->
<script
type="text/javascript"
src="https://gosspublic.alicdn.com/aliyun-oss-sdk-6.18.0.min.js"
></script>
<script type="text/javascript">
const client = new OSS({
// Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou.
region: "yourRegion",
authorizationV4: true,
// Specify the temporary AccessKey pair obtained from STS. The AccessKey pair consists of an AccessKey ID and an AccessKey secret.
accessKeyId: "yourAccessKeyId",
accessKeySecret: "yourAccessKeySecret",
// Specify the security token obtained from STS.
stsToken: "yourSecurityToken",
// Specify the name of the bucket. Example: examplebucket.
bucket: "examplebucket",
});
const headers = {
// Specify the caching behavior of the web page when the object is downloaded.
"Cache-Control": "no-cache",
// Specify the name of the object when the object is downloaded.
"Content-Disposition": "example.txt",
// Specify the content encoding format of the object when the object is downloaded.
"Content-Encoding": "utf-8",
// Specify the validity period of the request. Unit: milliseconds.
Expires: "1000",
// Specify the storage class of the object.
"x-oss-storage-class": "Standard",
// Specify one or more tags for the object.
"x-oss-tagging": "Tag1=1&Tag2=2",
// Specify whether to overwrite an existing object with the same name when the multipart upload task is initialized. In this example, the x-oss-forbid-overwrite parameter is set to true. This value specifies that an existing object cannot be overwritten by the object that has the same name.
"x-oss-forbid-overwrite": "true",
};
// Specify the name of the object that is uploaded to the examplebucket bucket. Example: exampleobject.txt.
const name = "exampleobject.txt";
// Obtain DOM.
const submit = document.getElementById("submit");
const options = {
// Query the progress, checkpoint, and return value of the multipart upload task.
progress: (p, cpt, res) => {
console.log(p);
},
// Specify the number of parts that can be uploaded in parallel.
parallel: 4,
// Specify the part size. Default value: 1 MB. Minimum value: 100 KB.
partSize: 1024 * 1024,
// headers,
// Specify the user metadata of the object. You can call the HeadObject operation to query the object metadata.
meta: { year: 2020, people: "test" },
mime: "text/plain",
};
// Configure an event listener.
submit.addEventListener("click", async () => {
try {
const data = document.getElementById("file").files[0];
// Start the multipart upload task.
const res = await client.multipartUpload(name, data, {
...options,
// Configure an upload callback.
// If no callback server is required, delete the callback configurations.
callback: {
// Specify the address of the server that receives the callback request.
url: "http://examplebucket.aliyuncs.com:23450",
// Specify the Host header in the callback request.
host: "yourHost",
/* eslint no-template-curly-in-string: [0] */
// Specify the body content of the callback request.
body: "bucket=${bucket}&object=${object}&var1=${x:var1}",
// Specify Content-Type in the callback request.
contentType: "application/x-www-form-urlencoded",
customValue: {
// Specify custom parameters for the callback request.
var1: "value1",
var2: "value2",
},
},
});
console.log(res);
} catch (err) {
console.log(err);
}
});
</script>
</body>
</html>
using System;
using System.Collections.Generic;
using System.IO;
using Aliyun.OSS;
using Aliyun.OSS.Common;
// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com.
var endpoint = "yourEndpoint";
// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.
var accessKeyId = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_ID");
var accessKeySecret = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_SECRET");
// Specify the name of the bucket.
var bucketName = "examplebucket";
// Specify the full path of the object. Do not include the bucket name in the full path.
var objectName = "exampleobject.txt";
// Specify the full path of the local file that you want to upload. By default, if you do not specify the full path of a local file, the local file is uploaded from the path of the project to which the sample program belongs.
var localFilename = "D:\\localpath\\examplefile.txt";
// Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou.
const string region = "cn-hangzhou";
// Create a ClientConfiguration instance and modify the default parameters based on your requirements.
var conf = new ClientConfiguration();
// Use the signature algorithm V4.
conf.SignatureVersion = SignatureVersion.V4;
// Create an OSSClient instance.
var client = new OssClient(endpoint, accessKeyId, accessKeySecret, conf);
client.SetRegion(region);
// Initiate the multipart upload task and obtain the upload ID in the response.
var uploadId = "";
try
{
// Specify the name of the uploaded object and the bucket for the object. You can configure object metadata in InitiateMultipartUploadRequest. However, you do not need to specify ContentLength.
var request = new InitiateMultipartUploadRequest(bucketName, objectName);
var result = client.InitiateMultipartUpload(request);
uploadId = result.UploadId;
// Display the upload ID.
Console.WriteLine("Init multi part upload succeeded");
// Cancel the multipart upload task or list uploaded parts based on the upload ID.
// If you want to cancel a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task.
// If you want to list the uploaded parts in a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task but before you call the CompleteMultipartUpload operation to complete the multipart upload task.
Console.WriteLine("Upload Id:{0}", result.UploadId);
}
catch (Exception ex)
{
Console.WriteLine("Init multi part upload failed, {0}", ex.Message);
Environment.Exit(1);
}
// Calculate the total number of parts.
var partSize = 100 * 1024;
var fi = new FileInfo(localFilename);
var fileSize = fi.Length;
var partCount = fileSize / partSize;
if (fileSize % partSize != 0)
{
partCount++;
}
// Initialize parts and start the multipart upload task. partETags is a list of PartETags. After OSS receives the partETags, OSS verifies all parts one by one. After all parts are verified, OSS combines these parts into a complete object.
var partETags = new List<PartETag>();
try
{
using (var fs = File.Open(localFilename, FileMode.Open))
{
for (var i = 0; i < partCount; i++)
{
var skipBytes = (long)partSize * i;
// Find the start position of the current upload task.
fs.Seek(skipBytes, 0);
// Calculate the part size in this upload. The size of the last part is the size of the remainder after the object is split by the calculated part size.
var size = (partSize < fileSize - skipBytes) ? partSize : (fileSize - skipBytes);
var request = new UploadPartRequest(bucketName, objectName, uploadId)
{
InputStream = fs,
PartSize = size,
PartNumber = i + 1
};
// Call UploadPart to upload parts. The returned results contain the ETag values of parts.
var result = client.UploadPart(request);
partETags.Add(result.PartETag);
Console.WriteLine("finish {0}/{1}", partETags.Count, partCount);
}
Console.WriteLine("Put multi part upload succeeded");
}
}
catch (Exception ex)
{
Console.WriteLine("Put multi part upload failed, {0}", ex.Message);
Environment.Exit(1);
}
// Combine the parts after the parts are uploaded.
try
{
var completeMultipartUploadRequest = new CompleteMultipartUploadRequest(bucketName, objectName, uploadId);
foreach (var partETag in partETags)
{
completeMultipartUploadRequest.PartETags.Add(partETag);
}
var result = client.CompleteMultipartUpload(completeMultipartUploadRequest);
Console.WriteLine("complete multi part succeeded");
}
catch (Exception ex)
{
Console.WriteLine("complete multi part failed, {0}", ex.Message);
Environment.Exit(1);
}
// Specify the name of the bucket. Example: examplebucket.
String bucketName = "examplebucket";
// Specify the full path of the object. Example: exampledir/exampleobject.txt. Do not include the bucket name in the full path.
String objectName = "exampledir/exampleobject.txt";
// Specify the full path of the local file. Example: /storage/emulated/0/oss/examplefile.txt.
String localFilepath = "/storage/emulated/0/oss/examplefile.txt";
// Initiate a multipart upload task.
InitiateMultipartUploadRequest init = new InitiateMultipartUploadRequest(bucketName, objectName);
InitiateMultipartUploadResult initResult = oss.initMultipartUpload(init);
// Obtain the upload ID.
String uploadId = initResult.getUploadId();
// Cancel the multipart upload task or list uploaded parts based on the upload ID.
// If you want to cancel a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task.
// If you want to list the uploaded parts in a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task and before you call the CompleteMultipartUpload operation to complete the multipart upload task.
// Log.d("uploadId", uploadId);
// Specify the part size. Unit: bytes. Valid values: 100 KB to 5 GB.
int partCount = 100 * 1024;
// Start the multipart upload task.
List<PartETag> partETags = new ArrayList<>();
for (int i = 1; i < 5; i++) {
byte[] data = new byte[partCount];
RandomAccessFile raf = new RandomAccessFile(localFilepath, "r");
long skip = (i-1) * partCount;
raf.seek(skip);
raf.readFully(data, 0, partCount);
UploadPartRequest uploadPart = new UploadPartRequest();
uploadPart.setBucketName(bucketName);
uploadPart.setObjectKey(objectName);
uploadPart.setUploadId(uploadId);
// Specify the part number of each part. The number starts from 1. Each part has a part number. Valid values: 1 to 10000.
uploadPart.setPartNumber(i);
uploadPart.setPartContent(data);
try {
UploadPartResult result = oss.uploadPart(uploadPart);
PartETag partETag = new PartETag(uploadPart.getPartNumber(), result.getETag());
partETags.add(partETag);
} catch (ServiceException serviceException) {
OSSLog.logError(serviceException.getErrorCode());
}
}
Collections.sort(partETags, new Comparator<PartETag>() {
@Override
public int compare(PartETag lhs, PartETag rhs) {
if (lhs.getPartNumber() < rhs.getPartNumber()) {
return -1;
} else if (lhs.getPartNumber() > rhs.getPartNumber()) {
return 1;
} else {
return 0;
}
}
});
// Complete the multipart upload task.
CompleteMultipartUploadRequest complete = new CompleteMultipartUploadRequest(bucketName, objectName, uploadId, partETags);
// Implement upload callback. You can configure the CALLBACK_SERVER parameter when you complete the multipart upload task. A callback request is sent to the specified server address after you complete the multipart upload task. You can view the servercallback result in completeResult.getServerCallbackReturnBody() of the response.
complete.setCallbackParam(new HashMap<String, String>() {
{
put("callbackUrl", CALLBACK_SERVER); // Set the CALLBACK_SERVER parameter to your server address.
put("callbackBody", "test");
}
});
CompleteMultipartUploadResult completeResult = oss.completeMultipartUpload(complete);
OSSLog.logError("-------------- serverCallback: " + completeResult.getServerCallbackReturnBody());
package main
import (
"fmt"
"log"
"os"
"github.com/aliyun/aliyun-oss-go-sdk/oss"
)
func main() {
// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.
provider, err := oss.NewEnvironmentVariableCredentialsProvider()
if err != nil {
log.Fatalf("Error: %v", err)
}
// Create an OSSClient instance.
// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. Specify your actual endpoint.
// Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou.
clientOptions := []oss.ClientOption{oss.SetCredentialsProvider(&provider)}
clientOptions = append(clientOptions, oss.Region("yourRegion"))
// Specify the version of the signature algorithm.
clientOptions = append(clientOptions, oss.AuthVersion(oss.AuthV4))
client, err := oss.New("yourEndpoint", "", "", clientOptions...)
if err != nil {
log.Fatalf("Error: %v", err)
}
// Specify the name of the bucket.
bucketName := "examplebucket"
// Specify the full path of the object. Do not include the bucket name in the full path.
objectName := "exampleobject.txt"
// Specify the full path of the local file. By default, if you do not specify the path of the local file, the file is uploaded from the path of the project to which the sample program belongs.
localFilename := "/localpath/exampleobject.txt"
bucket, err := client.Bucket(bucketName)
if err != nil {
log.Fatalf("Error: %v", err)
}
// Specify the part size. Unit: bytes. In this example, the part size is set to 5 MB.
partSize := int64(5 * 1024 * 1024)
// Call the multipart upload function.
if err := uploadMultipart(bucket, objectName, localFilename, partSize); err != nil {
log.Fatalf("Failed to upload multipart: %v", err)
}
}
// Specify the multipart upload function.
func uploadMultipart(bucket *oss.Bucket, objectName, localFilename string, partSize int64) error {
// Split the local file.
chunks, err := oss.SplitFileByPartSize(localFilename, partSize)
if err != nil {
return fmt.Errorf("failed to split file into chunks: %w", err)
}
// Open the local file.
file, err := os.Open(localFilename)
if err != nil {
return fmt.Errorf("failed to open file: %w", err)
}
defer file.Close()
// Step 1: Initiate a multipart upload task.
imur, err := bucket.InitiateMultipartUpload(objectName)
if err != nil {
return fmt.Errorf("failed to initiate multipart upload: %w", err)
}
// Step 2: Upload the parts.
var parts []oss.UploadPart
for _, chunk := range chunks {
part, err := bucket.UploadPart(imur, file, chunk.Size, chunk.Number)
if err != nil {
// If a part fails to be uploaded, cancel the multipart upload task.
if abortErr := bucket.AbortMultipartUpload(imur); abortErr != nil {
log.Printf("Failed to abort multipart upload: %v", abortErr)
}
return fmt.Errorf("failed to upload part: %w", err)
}
parts = append(parts, part)
}
// Set the access control list (ACL) of the object to private. By default, the object inherits the ACL of the bucket.
objectAcl := oss.ObjectACL(oss.ACLPrivate)
// Step 3: Complete the multipart upload task.
_, err = bucket.CompleteMultipartUpload(imur, parts, objectAcl)
if err != nil {
// If you fail to complete the multipart upload task, cancel the multipart upload task.
if abortErr := bucket.AbortMultipartUpload(imur); abortErr != nil {
log.Printf("Failed to abort multipart upload: %v", abortErr)
}
return fmt.Errorf("failed to complete multipart upload: %w", err)
}
log.Printf("Multipart upload completed successfully.")
return nil
}
__block NSString * uploadId = nil;
__block NSMutableArray * partInfos = [NSMutableArray new];
// Specify the name of the bucket. Example: examplebucket.
NSString * uploadToBucket = @"examplebucket";
// Specify the full path of the object. Example: exampledir/exampleobject.txt. Do not include the bucket name in the full path.
NSString * uploadObjectkey = @"exampledir/exampleobject.txt";
// Use OSSInitMultipartUploadRequest to specify the name of the uploaded object and the name of the bucket in which the object is stored.
OSSInitMultipartUploadRequest * init = [OSSInitMultipartUploadRequest new];
init.bucketName = uploadToBucket;
init.objectKey = uploadObjectkey;
// init.contentType = @"application/octet-stream";
// The response to multipartUploadInit contains the upload ID. The upload ID is the unique ID of the multipart upload task.
OSSTask * initTask = [client multipartUploadInit:init];
[initTask waitUntilFinished];
if (!initTask.error) {
OSSInitMultipartUploadResult * result = initTask.result;
uploadId = result.uploadId;
// Cancel the multipart upload task or list uploaded parts based on the upload ID.
// If you want to cancel a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task.
// If you want to list the uploaded parts in a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task but before you call the CompleteMultipartUpload operation to complete the multipart upload task.
//NSLog(@"UploadId": %@, uploadId);
} else {
NSLog(@"multipart upload failed, error: %@", initTask.error);
return;
}
// Specify the object that you want to upload.
NSString * filePath = @"<filepath>";
// Query the size of the object.
uint64_t fileSize = [[[NSFileManager defaultManager] attributesOfItemAtPath:filePath error:nil] fileSize];
// Specify the number of parts.
int chuckCount = 10;
// Specify the part size.
uint64_t offset = fileSize/chuckCount;
for (int i = 1; i <= chuckCount; i++) {
OSSUploadPartRequest * uploadPart = [OSSUploadPartRequest new];
uploadPart.bucketName = uploadToBucket;
uploadPart.objectkey = uploadObjectkey;
uploadPart.uploadId = uploadId;
uploadPart.partNumber = i; // Part numbers start from 1.
NSFileHandle* readHandle = [NSFileHandle fileHandleForReadingAtPath:filePath];
[readHandle seekToFileOffset:offset * (i -1)];
NSData* data = [readHandle readDataOfLength:offset];
uploadPart.uploadPartData = data;
OSSTask * uploadPartTask = [client uploadPart:uploadPart];
[uploadPartTask waitUntilFinished];
if (!uploadPartTask.error) {
OSSUploadPartResult * result = uploadPartTask.result;
// Record the part number, ETag, and size of the uploaded part.
[partInfos addObject:[OSSPartInfo partInfoWithPartNum:i eTag:result.eTag size:data.length]];
} else {
NSLog(@"upload part error: %@", uploadPartTask.error);
return;
}
}
OSSCompleteMultipartUploadRequest * complete = [OSSCompleteMultipartUploadRequest new];
complete.bucketName = uploadToBucket;
complete.objectKey = uploadObjectkey;
complete.uploadId = uploadId;
complete.partInfos = partInfos;
OSSTask * completeTask = [client completeMultipartUpload:complete];
[[completeTask continueWithBlock:^id(OSSTask *task) {
if (!task.error) {
OSSCompleteMultipartUploadResult * result = task.result;
// ...
} else {
// ...
}
return nil;
}] waitUntilFinished];
#include <alibabacloud/oss/OssClient.h>
#include <fstream>
int64_t getFileSize(const std::string& file)
{
std::fstream f(file, std::ios::in | std::ios::binary);
f.seekg(0, f.end);
int64_t size = f.tellg();
f.close();
return size;
}
using namespace AlibabaCloud::OSS;
int main(void)
{
/* Initialize information about the account that is used to access OSS. */
/* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
std::string Endpoint = "yourEndpoint";
/* Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou. */
std::string Region = "yourRegion";
/* Specify the name of the bucket. Example: examplebucket. */
std::string BucketName = "examplebucket";
/* Specify the full path of the object. Do not include the bucket name in the full path of the object. Example: exampledir/exampleobject.txt. */
std::string ObjectName = "exampledir/exampleobject.txt";
/* Initialize resources such as network resources. */
InitializeSdk();
ClientConfiguration conf;
conf.signatureVersion = SignatureVersionType::V4;
/* Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. */
auto credentialsProvider = std::make_shared<EnvironmentVariableCredentialsProvider>();
OssClient client(Endpoint, credentialsProvider, conf);
client.SetRegion(Region);
InitiateMultipartUploadRequest initUploadRequest(BucketName, ObjectName);
/* (Optional) Specify the storage class. */
//initUploadRequest.MetaData().addHeader("x-oss-storage-class", "Standard");
/* Initiate the multipart upload task. */
auto uploadIdResult = client.InitiateMultipartUpload(initUploadRequest);
/* Cancel the multipart upload task or list uploaded parts based on the upload ID. */
/* If you want to cancel a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task. */
/* If you want to list the uploaded parts in a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task but before you call the CompleteMultipartUpload operation to complete the multipart upload task. */
auto uploadId = uploadIdResult.result().UploadId();
std::string fileToUpload = "yourLocalFilename";
int64_t partSize = 100 * 1024;
PartList partETagList;
auto fileSize = getFileSize(fileToUpload);
int partCount = static_cast<int>(fileSize / partSize);
/* Calculate the number of parts. */
if (fileSize % partSize != 0) {
partCount++;
}
/* Upload each part. */
for (int i = 1; i <= partCount; i++) {
auto skipBytes = partSize * (i - 1);
auto size = (partSize < fileSize - skipBytes) ? partSize : (fileSize - skipBytes);
std::shared_ptr<std::iostream> content = std::make_shared<std::fstream>(fileToUpload, std::ios::in|std::ios::binary);
content->seekg(skipBytes, std::ios::beg);
UploadPartRequest uploadPartRequest(BucketName, ObjectName, content);
uploadPartRequest.setContentLength(size);
uploadPartRequest.setUploadId(uploadId);
uploadPartRequest.setPartNumber(i);
auto uploadPartOutcome = client.UploadPart(uploadPartRequest);
if (uploadPartOutcome.isSuccess()) {
Part part(i, uploadPartOutcome.result().ETag());
partETagList.push_back(part);
}
else {
std::cout << "uploadPart fail" <<
",code:" << uploadPartOutcome.error().Code() <<
",message:" << uploadPartOutcome.error().Message() <<
",requestId:" << uploadPartOutcome.error().RequestId() << std::endl;
}
}
/* Complete the multipart upload task. */
/* When the multipart upload task is completed, you need to provide all valid PartETags. After OSS receives the PartETags, OSS verifies all parts one by one. After part verification is successful, OSS combines these parts into a complete object. */
CompleteMultipartUploadRequest request(BucketName, ObjectName);
request.setUploadId(uploadId);
request.setPartList(partETagList);
/* (Optional) Specify the ACL of the object. */
//request.setAcl(CannedAccessControlList::Private);
auto outcome = client.CompleteMultipartUpload(request);
if (!outcome.isSuccess()) {
/* Handle exceptions. */
std::cout << "CompleteMultipartUpload fail" <<
",code:" << outcome.error().Code() <<
",message:" << outcome.error().Message() <<
",requestId:" << outcome.error().RequestId() << std::endl;
return -1;
}
/* Release resources such as network resources. */
ShutdownSdk();
return 0;
}
#include "oss_api.h"
#include "aos_http_io.h"
#include <sys/stat.h>
/* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
const char *endpoint = "yourEndpoint";
/* Specify the name of the bucket. Example: examplebucket. */
const char *bucket_name = "examplebucket";
/* Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt. */
const char *object_name = "exampledir/exampleobject.txt";
/* Specify the full path of the local file. */
const char *local_filename = "yourLocalFilename";
/* Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou. */
const char *region = "yourRegion";
void init_options(oss_request_options_t *options)
{
options->config = oss_config_create(options->pool);
/* Use a char* string to initialize data of the aos_string_t type. */
aos_str_set(&options->config->endpoint, endpoint);
/* Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. */
aos_str_set(&options->config->access_key_id, getenv("OSS_ACCESS_KEY_ID"));
aos_str_set(&options->config->access_key_secret, getenv("OSS_ACCESS_KEY_SECRET"));
// Specify two additional parameters.
aos_str_set(&options->config->region, region);
options->config->signature_version = 4;
/* Specify whether to use CNAME to access OSS. The value 0 indicates that CNAME is not used. */
options->config->is_cname = 0;
/* Configure network parameters, such as the timeout period. */
options->ctl = aos_http_controller_create(options->pool, 0);
}
int64_t get_file_size(const char *file_path)
{
int64_t filesize = -1;
struct stat statbuff;
if(stat(file_path, &statbuff) < 0){
return filesize;
} else {
filesize = statbuff.st_size;
}
return filesize;
}
int main(int argc, char *argv[])
{
/* Call the aos_http_io_initialize method in main() to initialize global resources, such as network resources and memory resources. */
if (aos_http_io_initialize(NULL, 0) != AOSE_OK) {
exit(1);
}
/* Create a memory pool to manage memory. aos_pool_t is equivalent to apr_pool_t. The code used to create a memory pool is included in the APR library. */
aos_pool_t *pool;
/* Create a memory pool. The value of the second parameter is NULL. This value indicates that the pool does not inherit other memory pools. */
aos_pool_create(&pool, NULL);
/* Create and initialize options. This parameter includes global configuration information, such as endpoint, access_key_id, access_key_secret, is_cname, and curl. */
oss_request_options_t *oss_client_options;
/* Allocate the memory resources in the memory pool to the options. */
oss_client_options = oss_request_options_create(pool);
/* Initialize oss_client_options. */
init_options(oss_client_options);
/* Initialize the parameters. */
aos_string_t bucket;
aos_string_t object;
oss_upload_file_t *upload_file = NULL;
aos_string_t upload_id;
aos_table_t *headers = NULL;
aos_table_t *complete_headers = NULL;
aos_table_t *resp_headers = NULL;
aos_status_t *resp_status = NULL;
aos_str_set(&bucket, bucket_name);
aos_str_set(&object, object_name);
aos_str_null(&upload_id);
headers = aos_table_make(pool, 1);
complete_headers = aos_table_make(pool, 1);
int part_num = 1;
/* Initiate a multipart upload task and obtain an upload ID. */
resp_status = oss_init_multipart_upload(oss_client_options, &bucket, &object, &upload_id, headers, &resp_headers);
/* Check whether the multipart upload task is initialized. */
if (aos_status_is_ok(resp_status)) {
printf("Init multipart upload succeeded, upload_id:%.*s\n",
upload_id.len, upload_id.data);
} else {
printf("Init multipart upload failed, upload_id:%.*s\n",
upload_id.len, upload_id.data);
}
/* Upload the parts. */
int64_t file_length = 0;
int64_t pos = 0;
aos_list_t complete_part_list;
oss_complete_part_content_t* complete_content = NULL;
char* part_num_str = NULL;
char* etag = NULL;
aos_list_init(&complete_part_list);
file_length = get_file_size(local_filename);
while(pos < file_length) {
upload_file = oss_create_upload_file(pool);
aos_str_set(&upload_file->filename, local_filename);
upload_file->file_pos = pos;
pos += 100 * 1024;
upload_file->file_last = pos < file_length ? pos : file_length;
resp_status = oss_upload_part_from_file(oss_client_options, &bucket, &object, &upload_id, part_num++, upload_file, &resp_headers);
/* Save the part numbers and ETags. */
complete_content = oss_create_complete_part_content(pool);
part_num_str = apr_psprintf(pool, "%d", part_num-1);
aos_str_set(&complete_content->part_number, part_num_str);
etag = apr_pstrdup(pool,
(char*)apr_table_get(resp_headers, "ETag"));
aos_str_set(&complete_content->etag, etag);
aos_list_add_tail(&complete_content->node, &complete_part_list);
if (aos_status_is_ok(resp_status)) {
printf("Multipart upload part from file succeeded\n");
} else {
printf("Multipart upload part from file failed\n");
}
}
/* Complete the multipart upload task. */
resp_status = oss_complete_multipart_upload(oss_client_options, &bucket, &object, &upload_id,
&complete_part_list, complete_headers, &resp_headers);
/* Check whether the multipart upload task is complete. */
if (aos_status_is_ok(resp_status)) {
printf("Complete multipart upload from file succeeded, upload_id:%.*s\n",
upload_id.len, upload_id.data);
} else {
printf("Complete multipart upload from file failed\n");
}
/* Release the memory pool. This operation releases the memory resources allocated for the request. */
aos_pool_destroy(pool);
/* Release the allocated global resources. */
aos_http_io_deinitialize();
return 0;
}
The preceding sample code uploads the parts one after another. To increase the upload speed, you can also upload parts in parallel. The following sample code provides an example of how to upload parts in parallel.
import os
from oss2 import determine_part_size, SizedFileAdapter
from oss2.models import PartInfo
import oss2
from concurrent.futures import ThreadPoolExecutor
from oss2.credentials import EnvironmentVariableCredentialsProvider
# Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())
# Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com.
endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
# Specify the ID of the region that maps to the endpoint. Example: cn-hangzhou. This parameter is required if you use the signature algorithm V4.
region = "cn-hangzhou"
# Specify the name of your bucket.
bucket = oss2.Bucket(auth, endpoint, "yourbucketname", region=region)
# Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt.
key = 'exampledir/exampleobject.txt'
# Specify the full path of the local file that you want to upload.
filename = 'D:\\localpath\\examplefile.txt'
def upload_part(filename, bucket, key, upload_id, part_number, offset, num_to_upload):
    """
    Upload a single part.
    :param filename: path of the local file that you want to upload.
    :param bucket: oss2.Bucket
    :param key: name of the object after the upload.
    :param upload_id: unique identifier of the multipart upload task.
    :param part_number: number of the part.
    :param offset: starting position of the current part.
    :param num_to_upload: size of the current part.
    :return: PartInfo object, including the ETag and number of each part.
    """
    with open(filename, 'rb') as fileobj:
        fileobj.seek(offset)
        result = bucket.upload_part(
            key,
            upload_id,
            part_number,
            SizedFileAdapter(fileobj, num_to_upload)
        )
        return PartInfo(part_number, result.etag)
def main():
    """
    The main function used to upload the file.
    """
    # Query the size of the file to upload.
    total_size = os.path.getsize(filename)
    # Set the preferred size of each part to 100 KB. You can adjust the size based on your business requirements.
    part_size = determine_part_size(total_size, preferred_size=100 * 1024)
    # Initiate a multipart upload task and obtain the upload ID in the response.
    upload_id = bucket.init_multipart_upload(key).upload_id
    part_info_list = []
    # Create a thread pool. Set the maximum number of concurrent requests to 8. You can adjust the number of concurrent requests based on your business requirements.
    max_workers = 8
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = []
        part_number = 1
        offset = 0
        # Iteratively upload all parts.
        while offset < total_size:
            num_to_upload = min(part_size, total_size - offset)
            # Submit the upload task to the thread pool.
            futures.append(
                executor.submit(upload_part, filename, bucket, key, upload_id, part_number, offset, num_to_upload)
            )
            offset += num_to_upload
            part_number += 1
        # Obtain the result (a PartInfo object) of each task and add it to part_info_list.
        for future in futures:
            part_info_list.append(future.result())
    # Complete the multipart upload task.
    bucket.complete_multipart_upload(key, upload_id, part_info_list)
    print(f'Upload completed for {key}')  # Output the name of the uploaded file.
if __name__ == "__main__":
    main()  # Perform the upload.
Use ossutil
You can use ossutil to perform multipart upload. For more information, see Upload objects.
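For example, a command in the following form uploads a local file to a bucket. The file path, bucket name, and object name are placeholders, and ossutil can split large files into parts when it uploads them.
ossutil cp examplefile.txt oss://examplebucket/exampledir/exampleobject.txt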
Use the OSS API
If your business requires a high level of customization, you can directly call RESTful API operations. To directly call an API operation, you must include the signature calculation in your code. For more information, see InitiateMultipartUpload.
FAQ
How do I delete parts?
If a multipart upload task is interrupted, the uploaded parts are stored in the bucket. If you no longer need the parts, you can use one of the following methods to delete them and avoid unnecessary storage costs:
Manually delete the parts (see the sketch after this list). For more information, see Delete parts.
Configure lifecycle rules to automatically delete the parts. For more information, see Configure lifecycle rules.
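A minimal sketch of the first method, based on the OSS SDK for Python (oss2). The object name and upload ID are placeholders for the values of the interrupted task.
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())
bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "examplebucket", region="cn-hangzhou")
# Cancel the multipart upload task. The parts that were uploaded by using this upload ID are deleted.
bucket.abort_multipart_upload("exampledir/exampleobject.txt", "yourUploadId")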
How do I list parts?
To list the parts that are uploaded by using a specific upload ID, call the ListParts operation. For more information, see ListParts.
To list the multipart upload tasks that are initiated but are not completed or canceled, call the ListMultipartUploads operation. For more information, see ListMultipartUploads.
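The following sketch shows both operations by using the OSS SDK for Python (oss2), which wraps them in the PartIterator and MultipartUploadIterator helpers. The object name and upload ID are placeholders.
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())
bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "examplebucket", region="cn-hangzhou")
# List the parts that were uploaded by using a specific upload ID (ListParts).
for part in oss2.PartIterator(bucket, "exampledir/exampleobject.txt", "yourUploadId"):
    print(part.part_number, part.etag, part.size)
# List the multipart upload tasks that have been initiated but are not completed or canceled (ListMultipartUploads).
for upload in oss2.MultipartUploadIterator(bucket):
    print(upload.key, upload.upload_id)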
Can I use multipart upload to upload encrypted or compressed local files to OSS?
Yes. You can use multipart upload to upload encrypted and compressed local files to OSS.
If I re-upload parts after a multipart upload task is interrupted, are the previously uploaded parts overwritten?
If you re-upload all parts by using the original upload ID after the multipart upload task is interrupted, the previously uploaded parts that have the same part numbers are overwritten. If you re-upload all parts by using a new upload ID, the parts that were uploaded by using the original upload ID are retained.
What is the upload ID in a multipart upload?
The upload ID uniquely identifies a multipart upload task. The part number identifies the relative position of a part among the parts that share the same upload ID.
How long is an upload ID valid during a multipart upload task?
The upload ID remains valid throughout the multipart upload process. The upload ID becomes invalid after the multipart upload task is canceled or completed. To perform another multipart upload, you must initiate a new multipart upload task to generate a new upload ID.
Does OSS support automatic combination of parts?
No, OSS does not support automatic combination of parts. You must call the CompleteMultipartUpload operation to combine the parts into a complete object.