Object Storage Service: Single-connection bandwidth throttling

Last Updated: Nov 15, 2024

This topic describes how to add parameters to an object upload or download request to limit the upload or download bandwidth. This ensures that sufficient bandwidth remains available for other applications.
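Throttling is expressed by attaching the x-oss-traffic-limit request header, which the Python SDK exposes as the OSS_TRAFFIC_LIMIT constant in oss2.headers. Its value is the allowed bandwidth in bit/s. The following is a minimal sketch of the conversion from KB/s to bit/s; the full runnable examples appear later in this topic:

# Minimal sketch: build the throttling header for a 100 KB/s limit.
from oss2.headers import OSS_TRAFFIC_LIMIT

limit_speed = 100 * 1024 * 8  # 100 KB/s = 819,200 bit/s
headers = {OSS_TRAFFIC_LIMIT: str(limit_speed)}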

Usage notes

  • In this topic, the public endpoint of the China (Hangzhou) region is used. If you want to access OSS from other Alibaba Cloud services in the same region as OSS, use an internal endpoint. For more information about OSS regions and endpoints, see Regions, endpoints and open ports.

  • In this topic, access credentials are obtained from environment variables. For more information about how to configure access credentials, see Configure access credentials. A short check of these environment variables is sketched after this list.

  • In this topic, an OSSClient instance is created by using an OSS endpoint. If you want to create an OSSClient instance by using custom domain names or Security Token Service (STS), see Initialization.
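Before you run the samples, you can verify that the required environment variables are configured. The following is a minimal sketch and is not part of the official samples:

# Minimal sketch: confirm that the access credential environment variables are set.
import os

for var in ("OSS_ACCESS_KEY_ID", "OSS_ACCESS_KEY_SECRET"):
    if not os.environ.get(var):
        raise EnvironmentError(f"Environment variable {var} is not set")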

Configure single-connection bandwidth throttling for simple upload and download

The following sample code provides an example on how to configure single-connection bandwidth throttling for simple upload and download:

# -*- coding: utf-8 -*-

import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider
from oss2.headers import OSS_TRAFFIC_LIMIT

# Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())

# Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
# Specify the ID of the region that maps to the endpoint. Example: cn-hangzhou. This parameter is required if you use the V4 signature algorithm.
region = "cn-hangzhou"

# Specify the name of the bucket. 
bucket = oss2.Bucket(auth, endpoint, "examplebucket", region=region)

# Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt. 
object_name = 'exampledir/exampleobject.txt'
# Specify the full path of the local file that you want to upload. Example: D:\\localpath\\examplefile.txt. If you specify only the file name without a path, the local file is uploaded from the path of the project to which the sample program belongs. 
local_file_name = 'D:\\localpath\\examplefile.txt'
# Specify the local path to which you want to download the object. If a file with the same name already exists in the path, the downloaded object overwrites it. Otherwise, the downloaded object is saved to the path. 
# If you do not specify a path, the downloaded object is saved to the path of the project to which the sample program belongs. 
down_file_name = 'D:\\localpath\\exampleobject.txt'

# Configure the headers parameter to set the bandwidth limit to 100 KB/s, which is equal to 819,200 bit/s. 
limit_speed = (100 * 1024 * 8)
headers = dict()
headers[OSS_TRAFFIC_LIMIT] = str(limit_speed)

# Configure bandwidth throttling for object upload. 
result = bucket.put_object_from_file(object_name, local_file_name, headers=headers)
print('http response status:', result.status)

# Configure bandwidth throttling for object download. 
result = bucket.get_object_to_file(object_name, down_file_name, headers=headers)
print('http response status:', result.status)

Configure single-connection bandwidth throttling for multipart upload

The following sample code provides an example on how to configure single-connection bandwidth throttling for multipart upload:

# -*- coding: utf-8 -*-
import os
from oss2 import SizedFileAdapter, determine_part_size
from oss2.headers import OSS_TRAFFIC_LIMIT
from oss2.models import PartInfo
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider

# Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())

# Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
# Specify the ID of the region that maps to the endpoint. Example: cn-hangzhou. This parameter is required if you use the V4 signature algorithm.
region = "cn-hangzhou"

# Specify the name of the bucket. 
bucket = oss2.Bucket(auth, endpoint, "examplebucket", region=region)

# Specify the full path of the object. The full path of the object cannot contain the bucket name. Example: exampledir/exampleobject.txt. 
key = 'exampledir/exampleobject.txt'
# Specify the full path of the local file that you want to upload. Example: D:\\localpath\\examplefile.txt. 
filename = 'D:\\localpath\\examplefile.txt'

total_size = os.path.getsize(filename)
# Use the determine_part_size method to determine the part size. 
part_size = determine_part_size(total_size, preferred_size=100 * 1024)

# Initiate a multipart upload task. 
upload_id = bucket.init_multipart_upload(key).upload_id
parts = []

# Configure the headers parameter to set the bandwidth limit to 100 KB/s, which is equal to 819,200 bit/s. 
limit_speed = (100 * 1024 * 8)
headers = dict()
headers[OSS_TRAFFIC_LIMIT] = str(limit_speed)

# Upload the parts one by one. 
with open(filename, 'rb') as fileobj:
    part_number = 1
    offset = 0
    while offset < total_size:
        num_to_upload = min(part_size, total_size - offset)
        # Wrap the file object with SizedFileAdapter(fileobj, size) so that only num_to_upload bytes are read from the current position for this part. 
        result = bucket.upload_part(key, upload_id, part_number,
                                    SizedFileAdapter(fileobj, num_to_upload), headers=headers)
        parts.append(PartInfo(part_number, result.etag))

        offset += num_to_upload
        part_number += 1

# Complete the multipart upload task. 
# The following code shows how to pass headers when you complete the multipart upload task. 
headers = dict()
# To specify the access control list (ACL) of the object, uncomment the following line. OBJECT_ACL_PRIVATE sets the ACL of the object to private. 
# headers["x-oss-object-acl"] = oss2.OBJECT_ACL_PRIVATE
bucket.complete_multipart_upload(key, upload_id, parts, headers=headers)
# If you do not need to configure headers, call the operation as follows instead: 
# bucket.complete_multipart_upload(key, upload_id, parts)

# Verify the result of the multipart upload task. 
with open(filename, 'rb') as fileobj:
    assert bucket.get_object(key).read() == fileobj.read()

Configure bandwidth throttling for upload and download that use signed URLs

The following sample code provides an example on how to configure single-connection bandwidth throttling when signed URLs are used to upload or download objects:

# -*- coding: utf-8 -*-

import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider
from oss2.headers import OSS_TRAFFIC_LIMIT

# Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())

# Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
# Specify the ID of the region that maps to the endpoint. Example: cn-hangzhou. This parameter is required if you use the V4 signature algorithm.
region = "cn-hangzhou"

# Specify the name of the bucket. 
bucket = oss2.Bucket(auth, endpoint, "examplebucket", region=region)

# Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt. 
object_name = 'exampledir/exampleobject.txt'
# Specify the full path of the local file that you want to upload. Example: D:\\localpath\\examplefile.txt. If you specify only the file name without a path, the local file is uploaded from the path of the project to which the sample program belongs. 
local_file_name = 'D:\\localpath\\examplefile.txt'
# Specify the local path to which you want to download the object. If a file with the same name already exists in the path, the downloaded object overwrites it. Otherwise, the downloaded object is saved to the path. 
# If you do not specify a path, the downloaded object is saved to the path of the project to which the sample program belongs. 
down_file_name = 'D:\\localpath\\exampleobject.txt'

# Configure the params parameter to set the bandwidth limit to 100 KB/s, which is equal to 819,200 bit/s. 
limit_speed = (100 * 1024 * 8)
params = dict()
params[OSS_TRAFFIC_LIMIT] = str(limit_speed)

# Generate a signed URL that includes the bandwidth throttling parameter for object upload and set the validity period of the URL to 60 seconds. 
url = bucket.sign_url('PUT', object_name, 60, params=params)
print('put object url:', url)

# Configure bandwidth throttling for object upload. 
result = bucket.put_object_with_url_from_file(url, local_file_name)
print('http response status:', result.status)

# Generate a signed URL that includes the bandwidth throttling parameter for object download and set the validity period of the URL to 60 seconds. 
url = bucket.sign_url('GET', object_name, 60, params=params)
print('get object url:', url)

# Configure bandwidth throttling for object download. 
result = bucket.get_object_with_url_to_file(url, down_file_name)
print('http response status:', result.status)
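
Because the bandwidth throttling parameter is embedded in the signed URL, the URL can also be used outside the SDK. The following is a minimal sketch that uses the third-party requests library (an assumption, not part of the official sample) to upload and download through signed URLs generated as shown above:

# Minimal sketch: reuse the signed URLs with the requests library.
import requests

# The x-oss-traffic-limit query parameter is already part of the signed URL,
# so the server applies the bandwidth limit to these requests as well.
put_url = bucket.sign_url('PUT', object_name, 60, params=params)
with open(local_file_name, 'rb') as f:
    resp = requests.put(put_url, data=f)
print('requests PUT status:', resp.status_code)

get_url = bucket.sign_url('GET', object_name, 60, params=params)
resp = requests.get(get_url)
print('requests GET status:', resp.status_code)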

References

For the complete sample code for single-connection bandwidth throttling, visit GitHub.