
ApsaraDB RDS:Migrate data from a self-managed SQL Server instance to an ApsaraDB RDS for SQL Server instance

Last Updated: Nov 20, 2024

ApsaraDB RDS for SQL Server provides an instance-level migration method that you can use to migrate the data of multiple or all databases from a self-managed SQL Server instance to an ApsaraDB RDS for SQL Server instance. You only need to upload the full backup files of these databases to the same folder in an Object Storage Service (OSS) bucket and then run the provided script to migrate the data to the RDS instance.

Prerequisites

  • The source database is a self-managed SQL Server instance.

  • The destination RDS instance meets the following requirements:

    • The RDS instance runs SQL Server 2008 R2, SQL Server 2012, or later. For more information about how to create an RDS instance, see Create an ApsaraDB RDS for SQL Server instance.

    • If the RDS instance runs SQL Server 2008 R2, databases are created on the RDS instance. Make sure that each database whose data you want to migrate from the self-managed SQL Server instance has a counterpart with an identical name on the RDS instance. In addition, make sure that the created databases are empty. For more information, see Create accounts and databases.

      Note

      If your RDS instance runs SQL Server 2012 or later, ignore this requirement.

    • The available storage of the RDS instance is sufficient. If the available storage is insufficient, you must expand the storage capacity of the RDS instance before you start the migration. For more information, see Change the specifications of an ApsaraDB RDS for SQL Server instance.

  • OSS is activated. For more information, see Activate OSS.

  • If you use a RAM user, make sure that the following requirements are met:

    • The AliyunOSSFullAccess and AliyunRDSFullAccess policies are attached to the RAM user. For more information about how to grant permissions to RAM users, see Use RAM to manage OSS permissions and Use RAM to manage ApsaraDB RDS permissions.

    • The service account of ApsaraDB RDS is authorized by using your Alibaba Cloud account to access the OSS bucket.

      To authorize the service account, perform the following steps:

      1. In the left-side navigation pane of the RDS instance details page, click Backup and Restoration. On the page that appears, click Migrate OSS Backup Data to RDS.

      2. On the Import Guide page, click Next for the first two steps to proceed to the 3. Import Data step.

        If the message You have authorized RDS official service account to access your OSS is displayed in the lower-left corner of the page, the service account is authorized to access the OSS bucket. Otherwise, click Authorize to authorize the service account.


    • A custom policy is manually created by using your Alibaba Cloud account and is attached to the RAM user. For more information about how to create a custom policy, see the Create a custom policy on the JSON tab section in Create custom policies.

      The custom policy is as follows:

      {
          "Version": "1",
          "Statement": [
              {
                  "Action": [
                      "ram:GetRole"
                  ],
                  "Resource": "acs:ram:*:*:role/AliyunRDSImportRole",
                  "Effect": "Allow"
              }
          ]
      }
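
      If you prefer to manage RAM programmatically rather than in the console, the following minimal Python sketch creates the preceding custom policy and attaches it to a RAM user by using the aliyun-python-sdk-ram package. This is only an illustration under stated assumptions: the package must be installed separately (pip install aliyun-python-sdk-ram), and the policy name testRdsImportPolicy and user name testuser are hypothetical.

      # -*- coding: utf-8 -*-
      # A minimal sketch, assuming aliyun-python-sdk-ram is installed.
      # The policy name and RAM user name below are hypothetical examples.
      from aliyunsdkcore.client import AcsClient
      from aliyunsdkram.request.v20150501.CreatePolicyRequest import CreatePolicyRequest
      from aliyunsdkram.request.v20150501.AttachPolicyToUserRequest import AttachPolicyToUserRequest

      client = AcsClient('<access_key_id>', '<access_key_secret>', 'cn-hangzhou')

      # Create the custom policy shown above.
      create = CreatePolicyRequest()
      create.set_PolicyName('testRdsImportPolicy')  # hypothetical policy name
      create.set_PolicyDocument('{"Version": "1", "Statement": [{"Action": ["ram:GetRole"], "Resource": "acs:ram:*:*:role/AliyunRDSImportRole", "Effect": "Allow"}]}')
      client.do_action_with_exception(create)

      # Attach the policy to the RAM user.
      attach = AttachPolicyToUserRequest()
      attach.set_PolicyType('Custom')
      attach.set_PolicyName('testRdsImportPolicy')
      attach.set_UserName('testuser')  # hypothetical RAM user name
      client.do_action_with_exception(attach)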

Limits

Only full backup files can be used for the data migration.

Billing

If you use the method described in this topic to migrate data, you are charged only for the use of OSS buckets.


The following billing rules apply:

  • Upload backup files to an OSS bucket: free of charge.

  • Store backup files in an OSS bucket: you are charged storage fees. For more information, visit the Pricing page of OSS.

  • Migrate backup files from an OSS bucket to your RDS instance:

    • If you migrate the backup files over an internal network, no fees are generated.

    • If you migrate the backup files over the Internet, you are charged for the outbound Internet traffic of the OSS bucket. For more information, visit the Pricing page of OSS.

Preparations

  1. Install Python 2.7.18. For more information, visit the Python official website.

  2. Check whether Python 2.7.18 is installed.

    • Windows operating systems

      Run the c:\Python27\python.exe -V command to check the Python version. If Python 2.7.18 is displayed, Python 2.7.18 is installed.

      If the system prompts that the preceding command is not an internal or external command, add the Python installation path and the pip command path to the Path environment variable.


    • macOS, Linux, or Unix operating systems

      Run the python -V command to check the Python version. If Python 2.7.18 is displayed, Python 2.7.18 is installed.

  3. Use one of the following methods to install the SDK dependency package:

    Method 1: Run pip commands

    pip install aliyun-python-sdk-rds
    pip install oss2

    Method 2: Use the source code

    # Clone the Alibaba Cloud OpenAPI SDK repository.
    git clone https://github.com/aliyun/aliyun-openapi-python-sdk.git
    # Install the SDK core package of Alibaba Cloud.
    cd aliyun-openapi-python-sdk/aliyun-python-sdk-core
    python setup.py install
    # Install the ApsaraDB RDS SDK.
    cd ../aliyun-python-sdk-rds
    python setup.py install
    cd ../..
    # Clone the OSS SDK repository.
    git clone https://github.com/aliyun/aliyun-oss-python-sdk.git
    cd aliyun-oss-python-sdk
    # Install oss2.
    python setup.py install

Step 1: Back up all databases on the self-managed SQL Server instance

Important
  • For data consistency purposes, we recommend that you stop data writes to these databases during the full backup.

  • If you do not use the backup script to perform the full backup, the names of the generated backup files must follow the format of Database name_Backup type_Backup time.bak, such as Testdb_FULL_20180518153544.bak. Otherwise, the files cannot be identified and are filtered out during the migration. A quick way to check a file name against this format is shown in the sketch after this note.
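
The following minimal Python sketch checks a file name against the required format before you upload it. It assumes that the backup time is a 14-digit timestamp, as in the example above; the sample file names are hypothetical.

    # -*- coding: utf-8 -*-
    # A minimal sketch that checks a backup file name against the required
    # "<Database name>_<Backup type>_<Backup time>.bak" format. It assumes a
    # 14-digit timestamp, as in Testdb_FULL_20180518153544.bak.
    import re

    BACKUP_NAME_PATTERN = re.compile(
        r'^(?P<db>.+)_(?P<type>FULL|DIFF|LOG)_(?P<time>\d{14})\.bak$',
        re.IGNORECASE)

    def is_valid_backup_name(file_name):
        """Return True if file_name matches the required naming format."""
        return BACKUP_NAME_PATTERN.match(file_name) is not None

    print(is_valid_backup_name('Testdb_FULL_20180518153544.bak'))  # True
    print(is_valid_backup_name('Testdb-20180518.bak'))             # False: would be filtered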

  1. Download the backup script file.

  2. Double-click the backup script file to open it by using Microsoft SQL Server Management Studio (SSMS). For more information about how to use SSMS for connections, see official documentation.

  3. Configure the parameters described in the following table.

    Sample parameter settings:

    SELECT
        /**
        * Databases list needed to backup, delimiter is : or ,
        * empty('') or null: means all databases excluding system database
        * example: '[testdb]: TestDR, Test, readonly'
        **/
        @backup_databases_list = N'[dtstestdata],[testdb]',
        @backup_type = N'FULL',                    -- Backup type. FULL: full backup; DIFF: differential backup; LOG: log backup
        @backup_folder = N'C:\BACKUP',             -- Backup folder to store backup files.
        @is_run = 1                                -- Check or run? 1: run directly; 0: only check

    Parameter descriptions:

    • @backup_databases_list: the names of the databases that you want to back up. If you specify multiple databases, separate the names with colons (:) or commas (,).

    • @backup_type: the backup type. Valid values:

      • FULL: full backup

      • DIFF: differential backup

      • LOG: log backup

      Important

      In this example, the value must be FULL.

    • @backup_folder: the directory on the self-managed instance in which the backup files are stored. If the specified directory does not exist, the script automatically creates one.

    • @is_run: specifies whether to perform a backup or only a check. Valid values:

      • 1: performs a backup.

      • 0: performs only a check.

  4. Run the backup script to back up the specified databases and store the backup files to the specified directory.


Step 2: Upload the backup files to the OSS bucket

Important

If you want to use an existing OSS bucket, make sure that the bucket meets the following requirements. You can verify both requirements by using the sketch that follows this list.

  • The storage class of the OSS bucket is Standard. The storage class cannot be Infrequent Access (IA), Archive, Cold Archive, or Deep Cold Archive. For more information, see Overview.

  • Data encryption is not enabled for the OSS bucket. For more information, see Data encryption.
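
The following minimal Python sketch checks both requirements by using the oss2 package from the Preparations section. The endpoint and bucket name are hypothetical examples, and the attribute names follow the oss2 GetBucketInfo and GetBucketEncryption operations.

    # -*- coding: utf-8 -*-
    # A minimal sketch, assuming the oss2 package from the Preparations section.
    # The endpoint and bucket name below are hypothetical examples.
    import oss2
    from oss2.exceptions import NoSuchServerSideEncryptionRule

    auth = oss2.Auth('<access_key_id>', '<access_key_secret>')
    bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'migratetest')

    # Requirement 1: the storage class must be Standard.
    info = bucket.get_bucket_info()
    print('Storage class: %s' % info.storage_class)  # Expected: Standard

    # Requirement 2: server-side encryption must not be enabled.
    try:
        rule = bucket.get_bucket_encryption()
        print('Encryption enabled: %s' % rule.sse_algorithm)  # Does not meet the requirement.
    except NoSuchServerSideEncryptionRule:
        print('Encryption is not enabled.')  # Meets the requirement.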

  1. Create an OSS bucket.

    1. Log on to the OSS console.

    2. In the left-side navigation pane, click Buckets. On the Buckets page, click Create Bucket.

    3. Configure the following parameters. Retain the default values for other parameters.

      Important
      • The created OSS bucket is used only for the data migration and is no longer needed after the migration is complete. You need to only configure the key parameters. To prevent data leaks and excessive costs, we recommend that you delete the OSS bucket at the earliest opportunity after the data migration is complete.

      • Do not enable data encryption when you create an OSS bucket. For more information, see Data encryption.

      Parameter descriptions:

      • Bucket Name: the name of the OSS bucket. The name must be globally unique and cannot be modified after it is configured. Naming conventions:

        • The name can contain only lowercase letters, digits, and hyphens (-).

        • The name must start and end with a lowercase letter or a digit.

        • The name must be 3 to 63 characters in length.

        Example: migratetest.

      • Region: the region of the OSS bucket. If you want to upload data to the OSS bucket from an Elastic Compute Service (ECS) instance over an internal network and then restore the data to the RDS instance over the internal network, make sure that the OSS bucket, the ECS instance, and the RDS instance reside in the same region. Example: China (Hangzhou).

      • Storage Class: the storage class of the bucket. Select Standard. The cloud migration operations described in this topic cannot be performed in buckets of other storage classes. Example: Standard.

  2. Upload backup files to the OSS bucket.

    Note

    If the RDS instance and the OSS bucket reside in the same region, they can communicate over an internal network. Uploading over the internal network is faster and does not generate Internet traffic fees. Therefore, we recommend that you upload the backup files to an OSS bucket that resides in the same region as the destination RDS instance.

    After the full backup on the self-managed SQL Server instance is complete, you must use one of the following methods to upload the generated full backup file to the OSS bucket:

    Method 1: Use the ossbrowser tool (recommended)

    1. Download ossbrowser. For more information, see Install and log on to ossbrowser.

    2. Decompress the downloaded package and run the program. In this example, a 64-bit Windows operating system is used: decompress the oss-browser-win32-x64.zip package and double-click oss-browser.exe.

    3. On the AK Login tab, configure the AccessKeyId and AccessKeySecret parameters, retain the default values for other parameters, and then click Login.

      Note

      An AccessKey pair is used to verify the identity of an Alibaba Cloud account and ensure data security. We recommend that you keep the AccessKey pair confidential. For more information about how to create and obtain an AccessKey pair, see Create an AccessKey pair.

    4. Click the name of the OSS bucket.

    5. Click the upload icon, select the backup file that you want to upload, and then click Open to upload the backup file to the OSS bucket.

    Method 2: Use the OSS console

    Note

    If the size of the backup file is less than 5 GB, we recommend that you upload the backup file in the OSS console.

    1. Log on to the OSS console.

    2. In the left-side navigation pane, click Buckets. On the Buckets page, click the name of the bucket to which you want to upload the backup file.

    3. On the Objects page, click Upload Object.

    4. Drag the backup file to the Files to Upload section, or click Select Files to select the backup file that you want to upload.

    5. In the lower part of the page, click Upload Object to upload the backup file to the OSS bucket.

    Method 3: Call the OSS API

    Note

    If the size of the backup file is larger than 5 GB, we recommend that you call the OSS API to upload the backup file to an OSS bucket by using multipart upload.

    In this example, a Java project is used to describe how to obtain access credentials from environment variables. Before you run the sample code, make sure that the environment variables are configured. For more information about how to configure the access credentials, see Configure access credentials. For more information about sample code, see Multipart upload.

    import com.aliyun.oss.ClientException;
    import com.aliyun.oss.OSS;
    import com.aliyun.oss.common.auth.*;
    import com.aliyun.oss.OSSClientBuilder;
    import com.aliyun.oss.OSSException;
    import com.aliyun.oss.common.comm.SignVersion;
    import com.aliyun.oss.internal.Mimetypes;
    import com.aliyun.oss.model.*;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.util.ArrayList;
    import java.util.List;
    
    public class Demo {
    
        public static void main(String[] args) throws Exception {
            // In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
            String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
            // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
            EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
            // Specify the name of the bucket. Example: examplebucket. 
            String bucketName = "examplebucket";
            // Specify the full path of the object. Example: exampledir/exampleobject.txt. Do not include the bucket name in the full path of the object. 
            String objectName = "exampledir/exampleobject.txt";
            // Specify the full path of the local file that you want to upload. 
            String filePath = "D:\\localpath\\examplefile.txt";
            // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou.
            String region = "cn-hangzhou";
    
            // Create an OSSClient instance. 
            ClientBuilderConfiguration clientBuilderConfiguration = new ClientBuilderConfiguration();
            clientBuilderConfiguration.setSignatureVersion(SignVersion.V4);        
            OSS ossClient = OSSClientBuilder.create()
            .endpoint(endpoint)
            .credentialsProvider(credentialsProvider)
            .clientConfiguration(clientBuilderConfiguration)
            .region(region)               
            .build();
            
            try {
                // Create an InitiateMultipartUploadRequest object. 
                InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest(bucketName, objectName);
    
                // The following sample code provides an example on how to specify the request headers when you initiate a multipart upload task: 
                 ObjectMetadata metadata = new ObjectMetadata();
                // metadata.setHeader(OSSHeaders.OSS_STORAGE_CLASS, StorageClass.Standard.toString());
                // Specify the caching behavior of the web page for the object. 
                // metadata.setCacheControl("no-cache");
                // Specify the name of the downloaded object. 
                // metadata.setContentDisposition("attachment;filename=oss_MultipartUpload.txt");
                // Specify the content encoding format of the object. 
                // metadata.setContentEncoding(OSSConstants.DEFAULT_CHARSET_NAME);
                // Specify whether existing objects are overwritten by objects that have the same names when the multipart upload task is initiated. In this example, this parameter is set to true, which indicates that existing objects cannot be overwritten by objects with the same names. 
                // metadata.setHeader("x-oss-forbid-overwrite", "true");
                // Specify the server-side encryption method that you want to use to encrypt each part of the object that you want to upload. 
                // metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_ENCRYPTION, ObjectMetadata.KMS_SERVER_SIDE_ENCRYPTION);
                // Specify the algorithm that you want to use to encrypt the object. If you do not specify this parameter, the object is encrypted by using AES-256. 
                // metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_DATA_ENCRYPTION, ObjectMetadata.KMS_SERVER_SIDE_ENCRYPTION);
                // Specify the ID of the customer master key (CMK) that is managed by Key Management Service (KMS). 
                // metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_ENCRYPTION_KEY_ID, "9468da86-3509-4f8d-a61e-6eab1eac****");
                // Specify the storage class of the object. 
                // metadata.setHeader(OSSHeaders.OSS_STORAGE_CLASS, StorageClass.Standard);
                // Specify one or more tags for the object. 
                // metadata.setHeader(OSSHeaders.OSS_TAGGING, "a:1");
                // request.setObjectMetadata(metadata);
    
                // Specify ContentType based on the object type. If you do not specify this parameter, the default value of ContentType is used, which is application/octet-stream. 
                if (metadata.getContentType() == null) {
                    metadata.setContentType(Mimetypes.getInstance().getMimetype(new File(filePath), objectName));
                }
    
                // Initiate the multipart upload task. 
                InitiateMultipartUploadResult upresult = ossClient.initiateMultipartUpload(request);
                // Obtain the upload ID. 
                String uploadId = upresult.getUploadId();
                // Cancel the multipart upload task or list uploaded parts based on the upload ID. 
                // If you want to cancel a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task.  
                // If you want to list the uploaded parts in a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task but before you call the CompleteMultipartUpload operation to complete the multipart upload task. 
                // System.out.println(uploadId);
    
                // partETags is a set of PartETags. A PartETag consists of the part number and ETag of an uploaded part. 
                List<PartETag> partETags =  new ArrayList<PartETag>();
                // Specify the size of each part, which is used to calculate the number of parts of the object. Unit: bytes. 
                final long partSize = 1 * 1024 * 1024L;   // Set the part size to 1 MB. 
    
                // Calculate the number of parts based on the size of the uploaded data. In the following sample code, a local file is used as an example to describe how to use the File.length() method to obtain the size of the uploaded data. 
                final File sampleFile = new File(filePath);
                long fileLength = sampleFile.length();
                int partCount = (int) (fileLength / partSize);
                if (fileLength % partSize != 0) {
                    partCount++;
                }
                // Upload all parts. 
                for (int i = 0; i < partCount; i++) {
                    long startPos = i * partSize;
                    long curPartSize = (i + 1 == partCount) ? (fileLength - startPos) : partSize;
                    UploadPartRequest uploadPartRequest = new UploadPartRequest();
                    uploadPartRequest.setBucketName(bucketName);
                    uploadPartRequest.setKey(objectName);
                    uploadPartRequest.setUploadId(uploadId);
                    // Specify the input stream of the multipart upload task. 
                    // In the following sample code, a local file is used as an example to describe how to create a FileInputStream and use the InputStream.skip() method to skip the specified data. 
                    InputStream instream = new FileInputStream(sampleFile);
                    instream.skip(startPos);
                    uploadPartRequest.setInputStream(instream);
                    // Specify the part size. The size of each part except for the last part must be greater than or equal to 100 KB. 
                    uploadPartRequest.setPartSize(curPartSize);
                    // Specify part numbers. Each part has a part number that ranges from 1 to 10,000. If the part number that you specify does not fall within the specified range, OSS returns the InvalidArgument error code. 
                    uploadPartRequest.setPartNumber( i + 1);
                    // Parts are not necessarily uploaded in order and can be uploaded from different OSS clients. OSS sorts the parts based on the part numbers and combines the parts into a complete object. 
                    UploadPartResult uploadPartResult = ossClient.uploadPart(uploadPartRequest);
                    // Each time a part is uploaded, OSS returns a result that contains the PartETag of the part. The PartETag is stored in partETags. 
                    partETags.add(uploadPartResult.getPartETag());
                }
    
    
                // Create a CompleteMultipartUploadRequest object. 
                // When you call the CompleteMultipartUpload operation, you must provide all valid PartETags. After OSS receives the PartETags, OSS verifies all parts one by one. After all parts are verified, OSS combines these parts into a complete object. 
                CompleteMultipartUploadRequest completeMultipartUploadRequest =
                        new CompleteMultipartUploadRequest(bucketName, objectName, uploadId, partETags);
    
                // The following sample code provides an example on how to configure the access control list (ACL) of the object when you initiate a multipart upload task: 
                // completeMultipartUploadRequest.setObjectACL(CannedAccessControlList.Private);
                // Specify whether to list all parts that are uploaded by using the current upload ID. For OSS SDK for Java 3.14.0 and later, you can set PartETags in CompleteMultipartUploadRequest to null only when you list all parts uploaded to the OSS server to combine the parts into a complete object. 
                // Map<String, String> headers = new HashMap<String, String>();
                // If you set x-oss-complete-all to yes in the request, OSS lists all parts that are uploaded by using the current upload ID, sorts the parts by part number, and then performs the CompleteMultipartUpload operation. 
                // If you set x-oss-complete-all to yes in the request, the request body cannot be specified. If you specify the request body, an error is reported. 
                // headers.put("x-oss-complete-all","yes");
                // completeMultipartUploadRequest.setHeaders(headers);
    
                // Complete the multipart upload task. 
                CompleteMultipartUploadResult completeMultipartUploadResult = ossClient.completeMultipartUpload(completeMultipartUploadRequest);
                System.out.println(completeMultipartUploadResult.getETag());
            } catch (OSSException oe) {
                System.out.println("Caught an OSSException, which means your request made it to OSS, "
                        + "but was rejected with an error response for some reason.");
                System.out.println("Error Message:" + oe.getErrorMessage());
                System.out.println("Error Code:" + oe.getErrorCode());
                System.out.println("Request ID:" + oe.getRequestId());
                System.out.println("Host ID:" + oe.getHostId());
            } catch (ClientException ce) {
                System.out.println("Caught an ClientException, which means the client encountered "
                        + "a serious internal problem while trying to communicate with OSS, "
                        + "such as not being able to access the network.");
                System.out.println("Error Message:" + ce.getMessage());
            } finally {
                if (ossClient != null) {
                    ossClient.shutdown();
                }
            }
        }
    }
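
    If you prefer to stay in Python, which the Preparations section already requires, the following minimal sketch uploads a backup file by using the resumable upload API of the oss2 package, which switches to multipart upload for files larger than the threshold. The endpoint, bucket name, object name, and local path are hypothetical examples.

    # -*- coding: utf-8 -*-
    # A minimal sketch, assuming the oss2 package from the Preparations section.
    # The endpoint, bucket name, object name, and local path are hypothetical.
    import oss2

    auth = oss2.Auth('<access_key_id>', '<access_key_secret>')
    bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'migratetest')

    # resumable_upload splits files larger than multipart_threshold into parts,
    # uploads the parts concurrently, and records progress so that an
    # interrupted upload can be resumed.
    oss2.resumable_upload(
        bucket,
        'Migrationdata/Testdb_FULL_20180518153544.bak',  # object name in the bucket
        'C:/BACKUP/Testdb_FULL_20180518153544.bak',      # local backup file
        multipart_threshold=100 * 1024 * 1024,           # switch to multipart above 100 MB
        part_size=10 * 1024 * 1024,                      # 10 MB parts
        num_threads=4)                                   # concurrent part uploads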

Step 3: Run the migration script to complete the migration task

  1. Download the migration script package.

  2. Decompress the migration script package and run the following command to view the parameters that you need to specify:

    python ~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -h

    A similar result is returned:

    RDSSQLCreateMigrateTasksBatchly.py -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>

    Parameter descriptions:

    • access_key_id: the AccessKey ID of the Alibaba Cloud account to which the RDS instance belongs.

    • access_key_secret: the AccessKey secret of the Alibaba Cloud account to which the RDS instance belongs.

    • rds_instance_id: the ID of the RDS instance.

    • oss_endpoint: the endpoint of the OSS bucket that stores the backup files. For more information about how to obtain the endpoint, see Bucket overview.

    • oss_bucket: the name of the OSS bucket that stores the backup files.

    • directory: the folder that stores the backup files in the OSS bucket. If the backup files are stored in the root folder, enter a forward slash (/).

  3. Run the migration script to complete the migration task.

    For example, the following command migrates all the backup files that meet the specified conditions from the Migrationdata folder in the OSS bucket named testdatabucket to the RDS instance whose ID is rm-2zesz5774ud8s****:

    python ~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -k LTAIQ**** -s BMkIUhroub******** -i rm-2zesz5774ud8s**** -e oss-cn-beijing.aliyuncs.com -b testdatabucket -d Migrationdata
  4. View the progress of the migration task.

    1. Log on to the ApsaraDB RDS console and go to the Instances page. In the top navigation bar, select the region in which the RDS instance resides. Then, find the RDS instance and click the ID of the instance.

    2. Perform the following steps based on the SQL Server version of your RDS instance:

      RDS SQL Server 2008 R2

      In the left-side navigation pane of the page that appears, click Database Migration to Cloud. You can view all the migration tasks that you have submitted.

      Note

      You can click Refresh in the upper-right corner of the page to view the latest status of the migration tasks.

      SQL Server 2012 and later

      In the left-side navigation pane of the page that appears, click Backup and Restoration. Then, click the Backup Data Upload History tab.

      Note

      By default, the migration records over the last seven days are displayed. You can specify a time range to view the migration tasks over the specified time range.
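
    You can also query migration tasks programmatically instead of in the console. The following minimal sketch calls the DescribeMigrateTasks operation by using the aliyun-python-sdk-rds package from the Preparations section. The instance ID, region, and time range are hypothetical examples, and the parameter and response field names are taken from the DescribeMigrateTasks API reference; treat them as assumptions.

    # -*- coding: utf-8 -*-
    # A minimal sketch, assuming the aliyun-python-sdk-rds package from the
    # Preparations section. The instance ID and time range are hypothetical.
    import json
    from aliyunsdkcore.client import AcsClient
    from aliyunsdkrds.request.v20140815.DescribeMigrateTasksRequest import DescribeMigrateTasksRequest

    client = AcsClient('<access_key_id>', '<access_key_secret>', 'cn-beijing')

    request = DescribeMigrateTasksRequest()
    request.set_DBInstanceId('rm-2zesz5774ud8s****')
    request.set_StartTime('2024-11-13T00:00Z')  # query window start
    request.set_EndTime('2024-11-20T00:00Z')    # query window end

    response = json.loads(client.do_action_with_exception(request))
    for task in response.get('Items', {}).get('MigrateTask', []):
        print('%s %s %s' % (task.get('DBName'), task.get('Status'), task.get('MigrateTaskId')))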

Common errors

The following list describes common error messages, their causes, and solutions.

  • Error message: HTTP Status: 404 Error:InvalidAccessKeyId.NotFound Specified access key is not found. RequestID: XXXXXXXXXXXXXXXXX

    Cause: The AccessKey ID that is used to call API operations is invalid.

    Solution: Use a valid AccessKey ID and AccessKey secret. For more information, see FAQ about AccessKey pairs.

  • Error message: HTTP Status: 400 Error:IncompleteSignature The request signature does not conform to Aliyun standards. server string to sign is:......

    Cause: The AccessKey secret that is used to call API operations is invalid.

    Solution: Use a valid AccessKey ID and AccessKey secret. For more information, see FAQ about AccessKey pairs.

  • Error message: RDS engine doesn't support, this is only for RDS SQL Server engine.

    Cause: The RDS instance to which you want to migrate data does not run SQL Server.

    Solution: Use an RDS instance that runs SQL Server.

  • Error message: Couldn't find specify RDS [XXX].

    Cause: The specified RDS instance ID does not exist.

    Solution: Check whether the ID of the RDS instance is valid. If it is invalid, enter a valid instance ID.

  • Error message: {'status': -2, 'request-id': '', 'details': "RequestError: HTTPConnectionPool(host='xxxxxxxxxxxxxxxxx', port=80): Max retries exceeded with url: /?bucketInfo= (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10e996490>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))"}

    Cause: The endpoint that is used to connect to the OSS bucket is invalid.

    Solution: Check whether the endpoint that is used to connect to the OSS bucket is valid. If it is invalid, enter a valid endpoint. For more information about how to obtain the endpoint, see Bucket overview.

  • Error message: {'status': 404, 'request-id': 'xxxxxxxxx', 'details': {'HostId': 'xxxxxxxxx', 'Message': 'The specified bucket does not exist.', 'Code': 'NoSuchBucket', 'RequestId': 'xxxxxxxx', 'BucketName': 'aaaatp-test-on-ecs'}}

    Cause: The OSS bucket does not exist.

    Solution: Check whether the entered name of the OSS bucket is valid. If it is invalid, enter a valid name.

  • Error message: There is no backup file on OSS Bucket [xxxxxx] under [xxxxxxxxx] folder, check please.

    Cause: The specified folder does not exist in the OSS bucket, or the folder does not contain backup files that meet the specified conditions.

    Solution: Check whether the folder exists in the OSS bucket and whether it contains backup files that meet the specified conditions. If not, create the folder and upload backup files that meet the specified conditions.

  • Error message: Warning!!!!!, [autotest_2005_ent_broken_full_dbcc_failed.bak] is not backup file, filtered.

    Cause: The names of the backup files do not meet the naming conventions.

    Solution: If you do not use the backup script to perform the full backup, make sure that the names of the generated backup files follow the format of Database name_Backup type_Backup time.bak, such as Testdb_FULL_20180518153544.bak.

  • Error message: HTTP Status: 403 Error:Forbidden.RAM The user is not authorized to operate the specified resource, or this operation does not support RAM. RequestID: xxxxx{'status': 403, 'request-id': 'xxxx', 'details': {'HostId': 'atp-test-on-ecs.oss-cn-beijing.aliyuncs.com', 'Message': 'The bucket you visit is not belong to you.', 'Code': 'AccessDenied', 'RequestId': 'xxxx'}}

    Cause: The RAM user does not have the required permissions.

    Solution: Attach the AliyunOSSFullAccess and AliyunRDSFullAccess policies to the RAM user. For more information about how to authorize a RAM user, see Authorize a RAM user.

  • Error message: OPENAPI Response Error !!!!! : HTTP Status: <Http Status Code> Error:<Error> <Description>. RequestID: 32BB6886-775E-4BB7-A054-635664****

    Cause: An error occurs when an API operation is called.

    Solution: Analyze the specific cause based on the error codes that are described in the following Error codes section.

Error codes

  • 403 InvalidDBName: "The specified database name is not allowed."

    The specified database names are invalid. For example, a database name that is the same as the name of a system database is invalid.

  • 403 IncorrectDBInstanceState: "Current DB instance state does not support this operation."

    The RDS instance is not in a required state. For example, the RDS instance is in the Creating state.

  • 400 IncorrectDBInstanceType: "Current DB instance type does not support this operation."

    The RDS instance does not run SQL Server.

  • 400 IncorrectDBInstanceLockMode: "Current DB instance lock mode does not support this operation."

    The RDS instance is in a locking state that does not support the operation.

  • 400 InvalidDBName.NotFound: "Specified one or more DB name does not exist or DB status does not support."

    The specified databases cannot be found.

    • SQL Server 2008 R2: Create databases on the RDS instance before the data migration. Make sure that each database whose data you want to migrate from the self-managed instance has a counterpart with an identical name on the RDS instance.

    • SQL Server 2012 or later: Make sure that each database whose data you want to migrate from the self-managed instance does not have a counterpart with an identical name on the RDS instance.

  • 400 IncorrectDBType: "Current DB type does not support this operation."

    The operation is not supported by the database engine that is run on the RDS instance.

  • 400 IncorrectDBState: "Current DB state does not support this operation."

    The databases are being created or are receiving data from another migration task.

  • 400 UploadLimitExceeded: "UploadTimesQuotaExceeded: Exceeding the daily upload times of this DB."

    The number of data migration tasks that are performed on a single database on the day exceeds 20.

  • 400 ConcurrentTaskExceeded: "Concurrent task exceeding the allowed amount."

    The number of data migration tasks that are performed on a single database on the day exceeds 500.

  • 400 IncorrectFileExtension: "The file extension does not support."

    The file name extensions of the backup files are invalid.

  • 400 InvalidOssUrl: "Specified oss url is not valid."

    The specified URL that is used to download backup files from the OSS bucket is invalid.

  • 400 BakFileSizeExceeded: "Exceeding the allowed bak file size."

    The total size of the backup files exceeds 3 TB.

  • 400 FileSizeExceeded: "Exceeding the allowed file size of DB instance."

    The size of the data restored from the backup files exceeds the available storage of the RDS instance.

Related operations

  • CreateMigrateTask: creates a data migration task.

  • CreateOnlineDatabaseTask: opens the database to which backup data is migrated on an ApsaraDB RDS for SQL Server instance.

  • DescribeMigrateTasks: queries the tasks that are created to migrate the backup data of an ApsaraDB RDS for SQL Server instance.

  • DescribeOssDownloads: queries the backup file details of a backup data migration task for an ApsaraDB RDS for SQL Server instance.
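
For reference, the following minimal Python sketch shows roughly what the batch migration script does for each backup file: it calls the CreateMigrateTask operation by using the aliyun-python-sdk-rds package from the Preparations section. The instance ID, database name, and OSS location are hypothetical examples, and the parameter names, including the colon-separated OssObjectPositions format, follow the CreateMigrateTask API reference; treat them as assumptions.

    # -*- coding: utf-8 -*-
    # A minimal sketch, assuming the aliyun-python-sdk-rds package from the
    # Preparations section. The instance ID, database name, and OSS location
    # below are hypothetical examples.
    import json
    from aliyunsdkcore.client import AcsClient
    from aliyunsdkrds.request.v20140815.CreateMigrateTaskRequest import CreateMigrateTaskRequest

    client = AcsClient('<access_key_id>', '<access_key_secret>', 'cn-beijing')

    request = CreateMigrateTaskRequest()
    request.set_DBInstanceId('rm-2zesz5774ud8s****')
    request.set_DBName('Testdb')        # destination database name
    request.set_BackupMode('FULL')      # this topic uses full backup files only
    request.set_IsOnlineDB(True)        # bring the database online after restoration
    # Location of the backup file: OSS endpoint, bucket name, and object name,
    # separated with colons (:).
    request.set_OssObjectPositions(
        'oss-cn-beijing.aliyuncs.com:testdatabucket:Migrationdata/Testdb_FULL_20180518153544.bak')

    print(json.loads(client.do_action_with_exception(request)))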