ApsaraDB RDS:Migrate the full backup data of a self-managed SQL Server instance to an ApsaraDB RDS instance that runs SQL Server 2008 R2 with cloud disks or runs SQL Server 2012 or later

Last Updated: Nov 19, 2024

ApsaraDB RDS for SQL Server instances allow you to easily migrate data to the cloud. You only need to upload the full backup data of the source database on your self-managed SQL Server instance to an Object Storage Service (OSS) bucket, and then migrate the full backup data to the destination database on your ApsaraDB RDS for SQL Server instance in the ApsaraDB RDS console. You can use this feature in scenarios such as data backup, migration, and disaster recovery.

Prerequisites

  • The RDS instance runs SQL Server 2008 R2 with cloud disks or runs SQL Server 2012 or later. The names of the databases on the RDS instance are different from the names of the databases that you want to migrate from the self-managed SQL Server instance. For more information about how to create an RDS instance, see Create an ApsaraDB RDS for SQL Server instance.

    Note

    ApsaraDB RDS instances that run SQL Server 2008 R2 with cloud disks are no longer available for purchase. For more information, see [EOS/Discontinuation] ApsaraDB RDS instances running SQL Server 2008 R2 with cloud disks are no longer available for purchase from July 14, 2023.

  • The available storage of the RDS instance is sufficient. If the available storage is insufficient, you must expand the storage capacity of the RDS instance before you start the migration. For more information, see Change the specifications of an ApsaraDB RDS for SQL Server instance.

  • A privileged account is created for the RDS instance. For more information, see Create accounts and databases.

  • The DBCC CHECKDB statement is executed to ensure that no allocation errors or consistency errors occur in the self-managed SQL Server instance (see the sketch at the end of this section). If no allocation or consistency errors occur, the following execution result is returned:

    ...
    CHECKDB found 0 allocation errors and 0 consistency errors in database 'xxx'.
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
  • OSS is activated. For more information, see Activate OSS.

  • If you use a RAM user, make sure that the following requirements are met:

    • The AliyunOSSFullAccess and AliyunRDSFullAccess policies are attached to the RAM user. For more information about how to grant permissions to RAM users, see Use RAM to manage OSS permissions and Use RAM to manage ApsaraDB RDS permissions.

    • The service account of ApsaraDB RDS is authorized to access the OSS bucket. The authorization must be performed by using your Alibaba Cloud account.

      To authorize the service account, perform the following steps:

      1. In the left-side navigation pane of the RDS instance details page, click Backup and Restoration. On the page that appears, click Migrate OSS Backup Data to RDS.

      2. On the Import Guide page, click Next for the first two steps to proceed to the 3. Import Data step.

        If the message You have authorized RDS official service account to access your OSS is displayed in the lower-left corner of the page, the service account is already authorized to access the OSS bucket. Otherwise, click Authorize to complete the authorization.


    • A custom policy is manually created by using your Alibaba Cloud account and is attached to the RAM user. For more information about how to create a custom policy, see the Create a custom policy on the JSON tab section in Create custom policies.

      The following custom policy is used:

      {
          "Version": "1",
          "Statement": [
              {
                  "Action": [
                      "ram:GetRole"
                  ],
                  "Resource": "acs:ram:*:*:role/AliyunRDSImportRole",
                  "Effect": "Allow"
              }
          ]
      }
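
The DBCC CHECKDB prerequisite above can be verified with a short T-SQL sketch. The database name MyDatabase is a placeholder for your source database:

    -- Replace MyDatabase with the name of your source database.
    -- A clean run reports 0 allocation errors and 0 consistency errors.
    DBCC CHECKDB ('MyDatabase');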

Usage notes

  • The migration method that is described in this topic is at the database level. You can migrate the full backup data of only one self-managed database to your RDS instance at a time. If you want to migrate the backup data of multiple or all databases in a self-managed SQL Server instance at a time, we recommend that you use an instance-level migration method. For more information, see Migrate data from a self-managed SQL Server instance to an ApsaraDB RDS for SQL Server instance.

  • You cannot migrate the backup files of a self-managed SQL Server instance that runs a later database engine version than the RDS instance. For example, you cannot migrate the backup files of a self-managed instance that runs SQL Server 2016 to an RDS instance that runs SQL Server 2012.

  • Differential backup files and log backup files are not supported.

  • To avoid data migration failures, make sure that the name of a full backup file does not contain the following special characters: ! @ # $ % ^ & * ( ) _ + - =

  • After you authorize the service account of ApsaraDB RDS to access the OSS bucket, a role named AliyunRDSImportRole is created in RAM. Do not modify or delete this role. If you modify or delete this role, the backup files cannot be downloaded from the OSS bucket. In this case, you must re-authorize the service account by using the migration wizard.

  • The RDS instance does not carry over the accounts of the self-managed SQL Server instance. After the migration is complete, you must create accounts for the RDS instance in the ApsaraDB RDS console.

  • Before the migration is complete, do not delete the full backup file from the OSS bucket. If you delete the full backup file before the migration is complete, the migration fails.

  • The name of a backup file must use one of the following suffixes: bak, diff, trn, or log. The following list describes the suffixes:

    • bak: indicates a full backup file.

    • diff: indicates a differential backup file.

    • trn or log: indicates a log backup file of transactions.

    Note
    • If the backup files do not use the preceding suffixes, the system may fail to identify the types of the backup files. This affects subsequent operations.

    • By default, the full backup files of the RDS instance are in the ZIP format. If you download a ZIP file, you must decompress the ZIP file to obtain a full backup file whose name is suffixed by bak. Then, you can use the full backup file to migrate the data to your RDS instance.

Billing

If you use the method described in this topic to migrate data, you are charged only for the use of OSS buckets.


  • Upload backup files to an OSS bucket: free of charge.

  • Store backup files in an OSS bucket: you are charged storage fees. For more information, visit the Pricing page of OSS.

  • Migrate backup files from an OSS bucket to your RDS instance:

    • If you migrate the backup files over an internal network, no fees are generated.

    • If you migrate the backup files over the Internet, you are charged for the outbound Internet traffic of the OSS bucket. For more information, visit the Pricing page of OSS.

Step 1: Back up the source database on the self-managed instance

Note

Before you perform a full backup, you must stop all data writes to the source database on the self-managed instance. The data that is written to the self-managed database during the full backup process cannot be backed up.
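
If it fits your availability requirements, one way to block writes during the backup window is to set the source database to read-only. The following is a minimal T-SQL sketch; MyDatabase is a placeholder, and ROLLBACK IMMEDIATE disconnects sessions that have open transactions:

    -- Block writes before the full backup. MyDatabase is a placeholder.
    ALTER DATABASE MyDatabase SET READ_ONLY WITH ROLLBACK IMMEDIATE;

    -- After the backup completes, restore write access if the source database is still needed.
    ALTER DATABASE MyDatabase SET READ_WRITE WITH ROLLBACK IMMEDIATE;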

  1. Download the backup script and use SQL Server Management Studio (SSMS) to open the backup script.

  2. Modify the parameters based on your business requirements. The backup script provides the following parameters:

    • @backup_databases_list: the name of the database that you want to back up. If you specify multiple databases, separate the names with semicolons (;) or commas (,).

    • @backup_type: the backup type. Valid values:

      • FULL: full backup

      • DIFF: differential backup

      • LOG: log backup

    • @backup_folder: the directory in which backup files are stored on the self-managed instance. If the specified directory does not exist, the script automatically creates it.

    • @is_run: specifies whether to perform a backup or only a check. Valid values:

      • 1: performs a backup.

      • 0: performs a check only.

    Note

    Modify the preceding parameters in the SELECT statement in the YOU HAVE TO INIT PUBLIC VARIABLES HERE section of the backup script.

  3. Run the backup script.
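
If you prefer not to use the script, a native full backup produces an equivalent .bak file. The following is a minimal sketch; the database name MyDatabase and the path D:\Backup\ are placeholders, and WITH CHECKSUM adds page verification during the backup:

    -- MyDatabase and the target path are placeholders.
    -- INIT overwrites any existing backup sets in the target file.
    BACKUP DATABASE MyDatabase
        TO DISK = N'D:\Backup\MyDatabase_FULL.bak'
        WITH CHECKSUM, INIT;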

Step 2: Upload the backup file to the OSS bucket

Important

If you use an existing OSS bucket, make sure that the bucket meets the following requirements:

  • The storage class of the OSS bucket is Standard. The storage class cannot be Infrequent Access (IA), Archive, Cold Archive, or Deep Cold Archive. For more information, see Overview.

  • Data encryption is not enabled for the OSS bucket. For more information, see Data encryption.

  1. Create an OSS bucket.

    1. Log on to the OSS console.

    2. In the left-side navigation pane, click Buckets. On the Buckets page, click Create Bucket.

    3. Configure the following parameters. Retain the default values for other parameters.

      Important
      • The OSS bucket is used only for the data migration and is no longer needed after the migration is complete. You only need to configure the key parameters. To prevent data leaks and excessive costs, we recommend that you delete the OSS bucket at the earliest opportunity after the migration is complete.

      • Do not enable data encryption when you create an OSS bucket. For more information, see Data encryption.

      • Bucket Name: the name of the OSS bucket. The name must be globally unique and cannot be modified after it is configured. Naming conventions:

        • The name can contain only lowercase letters, digits, and hyphens (-).

        • The name must start and end with a lowercase letter or a digit.

        • The name must be 3 to 63 characters in length.

        Example: migratetest

      • Region: the region of the OSS bucket. If you want to upload data to the OSS bucket from an Elastic Compute Service (ECS) instance over an internal network and then restore the data to the RDS instance over the internal network, make sure that the OSS bucket, ECS instance, and RDS instance reside in the same region.

        Example: China (Hangzhou)

      • Storage Class: the storage class of the bucket. Select Standard. The cloud migration operations described in this topic cannot be performed in buckets of other storage classes.

        Example: Standard

  2. Upload backup files to the OSS bucket.

    Note

    If the RDS instance and the OSS bucket reside in the same region, they can communicate with each other over an internal network. You can use the internal network to upload the backup data. This method is faster and does not generate fees for Internet traffic. We recommend that you upload the backup file to an OSS bucket that is in the same region as the destination RDS instance.

    After the full backup on the self-managed SQL Server instance is complete, you must use one of the following methods to upload the generated full backup file to the OSS bucket:

    Method 1: Use the ossbrowser tool (recommended)

    1. Download ossbrowser. For more information, see Install and log on to ossbrowser.

    2. In this example, a 64-bit Windows operating system is used. Decompress the downloaded oss-browser-win32-x64.zip package, and then double-click oss-browser.exe to run the program.

    3. On the AK Login tab, configure the AccessKeyId and AccessKeySecret parameters, retain the default values for other parameters, and then click Login.

      Note

      An AccessKey pair is used to verify the identity of an Alibaba Cloud account and ensure data security. We recommend that you keep the AccessKey pair confidential. For more information about how to create and obtain an AccessKey pair, see Create an AccessKey pair.

    4. Click the name of the OSS bucket.

    5. Click the upload icon, select the backup file that you want to upload, and then click Open to upload the backup file to the OSS bucket.

    Method 2: Use the OSS console

    Note

    If the size of the backup file is less than 5 GB, we recommend that you upload the backup file in the OSS console.

    1. Log on to the OSS console.

    2. In the left-side navigation pane, click Buckets. On the Buckets page, click the name of the bucket to which you want to upload the backup file.

    3. On the Objects page, click Upload Object.

    4. Drag the backup file to the Files to Upload section or click Select Files to select the backup file that you want to upload.

    5. In the lower part of the page, click Upload Object to upload the backup file to the OSS bucket.

    Method 3: Call the OSS API

    Note

    If the size of the backup file is larger than 5 GB, we recommend that you call the OSS API to upload the backup file to an OSS bucket by using multipart upload.

    In this example, a Java project is used to describe how to obtain access credentials from environment variables. Before you run the sample code, make sure that the environment variables are configured. For more information about how to configure the access credentials, see Configure access credentials. For more information about sample code, see Multipart upload.

    import com.aliyun.oss.ClientException;
    import com.aliyun.oss.OSS;
    import com.aliyun.oss.common.auth.*;
    import com.aliyun.oss.OSSClientBuilder;
    import com.aliyun.oss.OSSException;
    import com.aliyun.oss.internal.Mimetypes;
    import com.aliyun.oss.model.*;
    import com.aliyun.oss.common.comm.SignVersion;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.util.ArrayList;
    import java.util.List;
    
    public class Demo {
    
        public static void main(String[] args) throws Exception {
            // In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
            String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
            // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
            EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
            // Specify the name of the bucket. Example: examplebucket. 
            String bucketName = "examplebucket";
            // Specify the full path of the object. Example: exampledir/exampleobject.txt. Do not include the bucket name in the full path of the object. 
            String objectName = "exampledir/exampleobject.txt";
            // Specify the full path of the local file that you want to upload. 
            String filePath = "D:\\localpath\\examplefile.txt";
            // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou.
            String region = "cn-hangzhou";
    
            // Create an OSSClient instance. 
            ClientBuilderConfiguration clientBuilderConfiguration = new ClientBuilderConfiguration();
            clientBuilderConfiguration.setSignatureVersion(SignVersion.V4);        
            OSS ossClient = OSSClientBuilder.create()
            .endpoint(endpoint)
            .credentialsProvider(credentialsProvider)
            .clientConfiguration(clientBuilderConfiguration)
            .region(region)               
            .build();
            
            try {
                // Create an InitiateMultipartUploadRequest object. 
                InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest(bucketName, objectName);
    
                // The following sample code provides an example on how to specify the request headers when you initiate a multipart upload task: 
                ObjectMetadata metadata = new ObjectMetadata();
                // metadata.setHeader(OSSHeaders.OSS_STORAGE_CLASS, StorageClass.Standard.toString());
                // Specify the caching behavior of the web page for the object. 
                // metadata.setCacheControl("no-cache");
                // Specify the name of the downloaded object. 
                // metadata.setContentDisposition("attachment;filename=oss_MultipartUpload.txt");
                // Specify the content encoding format of the object. 
                // metadata.setContentEncoding(OSSConstants.DEFAULT_CHARSET_NAME);
                // Specify whether existing objects are overwritten by objects that have the same names when the multipart upload task is initiated. In this example, this parameter is set to true, which indicates that existing objects cannot be overwritten by objects with the same names. 
                // metadata.setHeader("x-oss-forbid-overwrite", "true");
                // Specify the server-side encryption method that you want to use to encrypt each part of the object that you want to upload. 
                // metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_ENCRYPTION, ObjectMetadata.KMS_SERVER_SIDE_ENCRYPTION);
                // Specify the algorithm that you want to use to encrypt the object. If you do not specify this parameter, the object is encrypted by using AES-256. 
                // metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_DATA_ENCRYPTION, ObjectMetadata.KMS_SERVER_SIDE_ENCRYPTION);
                // Specify the ID of the customer master key (CMK) that is managed by Key Management Service (KMS). 
                // metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_ENCRYPTION_KEY_ID, "9468da86-3509-4f8d-a61e-6eab1eac****");
                // Specify the storage class of the object. 
                // metadata.setHeader(OSSHeaders.OSS_STORAGE_CLASS, StorageClass.Standard);
                // Specify one or more tags for the object. 
                // metadata.setHeader(OSSHeaders.OSS_TAGGING, "a:1");
                // request.setObjectMetadata(metadata);
    
                // Specify ContentType based on the object type. If you do not specify this parameter, the default value of ContentType is used, which is application/octet-stream. 
                if (metadata.getContentType() == null) {
                    metadata.setContentType(Mimetypes.getInstance().getMimetype(new File(filePath), objectName));
                }
    
                // Initiate the multipart upload task. 
                InitiateMultipartUploadResult upresult = ossClient.initiateMultipartUpload(request);
                // Obtain the upload ID. 
                String uploadId = upresult.getUploadId();
                // Cancel the multipart upload task or list uploaded parts based on the upload ID. 
                // If you want to cancel a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task.  
                // If you want to list the uploaded parts in a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task but before you call the CompleteMultipartUpload operation to complete the multipart upload task. 
                // System.out.println(uploadId);
    
                // partETags is a set of PartETags. A PartETag consists of the part number and ETag of an uploaded part. 
                List<PartETag> partETags =  new ArrayList<PartETag>();
                // Specify the size of each part, which is used to calculate the number of parts of the object. Unit: bytes. 
                final long partSize = 1 * 1024 * 1024L;   // Set the part size to 1 MB. 
    
                // Calculate the number of parts based on the size of the uploaded data. In the following sample code, a local file is used as an example to describe how to use the File.length() method to obtain the size of the uploaded data. 
                final File sampleFile = new File(filePath);
                long fileLength = sampleFile.length();
                int partCount = (int) (fileLength / partSize);
                if (fileLength % partSize != 0) {
                    partCount++;
                }
                // Upload all parts. 
                for (int i = 0; i < partCount; i++) {
                    long startPos = i * partSize;
                    long curPartSize = (i + 1 == partCount) ? (fileLength - startPos) : partSize;
                    UploadPartRequest uploadPartRequest = new UploadPartRequest();
                    uploadPartRequest.setBucketName(bucketName);
                    uploadPartRequest.setKey(objectName);
                    uploadPartRequest.setUploadId(uploadId);
                    // Specify the input stream of the multipart upload task. 
                    // In the following sample code, a local file is used as an example to describe how to create a FileInputStream and use the InputStream.skip() method to skip the specified data. 
                    InputStream instream = new FileInputStream(sampleFile);
                    instream.skip(startPos);
                    uploadPartRequest.setInputStream(instream);
                    // Specify the part size. The size of each part except for the last part must be greater than or equal to 100 KB. 
                    uploadPartRequest.setPartSize(curPartSize);
                    // Specify part numbers. Each part has a part number that ranges from 1 to 10,000. If the part number that you specify does not fall within the specified range, OSS returns the InvalidArgument error code. 
                    uploadPartRequest.setPartNumber( i + 1);
                    // Parts are not necessarily uploaded in order and can be uploaded from different OSS clients. OSS sorts the parts based on the part numbers and combines the parts into a complete object. 
                    UploadPartResult uploadPartResult = ossClient.uploadPart(uploadPartRequest);
                    // Each time a part is uploaded, OSS returns a result that contains the PartETag of the part. The PartETag is stored in partETags. 
                    partETags.add(uploadPartResult.getPartETag());
                }
    
    
                // Create a CompleteMultipartUploadRequest object. 
                // When you call the CompleteMultipartUpload operation, you must provide all valid PartETags. After OSS receives the PartETags, OSS verifies all parts one by one. After all parts are verified, OSS combines these parts into a complete object. 
                CompleteMultipartUploadRequest completeMultipartUploadRequest =
                        new CompleteMultipartUploadRequest(bucketName, objectName, uploadId, partETags);
    
                // The following sample code provides an example on how to configure the access control list (ACL) of the object when you initiate a multipart upload task: 
                // completeMultipartUploadRequest.setObjectACL(CannedAccessControlList.Private);
                // Specify whether to list all parts that are uploaded by using the current upload ID. For OSS SDK for Java 3.14.0 and later, you can set PartETags in CompleteMultipartUploadRequest to null only when you list all parts uploaded to the OSS server to combine the parts into a complete object. 
                // Map<String, String> headers = new HashMap<String, String>();
                // If you set x-oss-complete-all to yes in the request, OSS lists all parts that are uploaded by using the current upload ID, sorts the parts by part number, and then performs the CompleteMultipartUpload operation. 
                // If you set x-oss-complete-all to yes in the request, the request body cannot be specified. If you specify the request body, an error is reported. 
                // headers.put("x-oss-complete-all","yes");
                // completeMultipartUploadRequest.setHeaders(headers);
    
                // Complete the multipart upload task. 
                CompleteMultipartUploadResult completeMultipartUploadResult = ossClient.completeMultipartUpload(completeMultipartUploadRequest);
                System.out.println(completeMultipartUploadResult.getETag());
            } catch (OSSException oe) {
                System.out.println("Caught an OSSException, which means your request made it to OSS, "
                        + "but was rejected with an error response for some reason.");
                System.out.println("Error Message:" + oe.getErrorMessage());
                System.out.println("Error Code:" + oe.getErrorCode());
                System.out.println("Request ID:" + oe.getRequestId());
                System.out.println("Host ID:" + oe.getHostId());
            } catch (ClientException ce) {
                System.out.println("Caught an ClientException, which means the client encountered "
                        + "a serious internal problem while trying to communicate with OSS, "
                        + "such as not being able to access the network.");
                System.out.println("Error Message:" + ce.getMessage());
            } finally {
                if (ossClient != null) {
                    ossClient.shutdown();
                }
            }
        }
    }

Step 3: Create a migration task

  1. Go to the Instances page. In the top navigation bar, select the region in which the RDS instance resides. Then, find the RDS instance and click the ID of the instance.

  2. In the left-side navigation pane, click Backup and Restoration.

  3. In the upper-right corner of the page, click Migrate OSS Backup Data to RDS.

  4. In the Import Guide wizard, click Next twice.

    Note

    If you use the OSS-based migration wizard for the first time, you must authorize the service account of ApsaraDB RDS to access the OSS bucket. In this case, you must click Authorize and complete the authorization. Otherwise, the OSS Bucket drop-down list in the Import Data step is empty.

  5. Configure the following parameters.

    • Database Name: the name of the destination database on your RDS instance. The destination database stores the data that is migrated from the source database on the self-managed SQL Server instance. The name must meet the naming requirements of SQL Server.

      Important
      Before the migration, make sure that the name of the database that you want to restore from the backup file is different from the names of all existing databases on the RDS instance, and that no existing database on the RDS instance contains database files whose names are the same as that of the database that you want to restore. If either requirement is not met, the migration fails. You can list the existing databases as shown in the sketch after this procedure.

    • OSS Bucket: the OSS bucket that stores the full backup file.

    • OSS Subfolder Name: the name of the OSS subfolder that stores the full backup file.

    • OSS File: the full backup file that you want to import. You can enter a prefix in the search box and click the magnifier icon to search for the full backup file in fuzzy match mode. The name, size, and update time of each full backup file whose name contains the prefix are displayed. Select the backup file that you want to migrate to the cloud.

    • Cloud Migration Method: the migration method. Valid values:

      • Immediate Access (Full Backup): select this method if you want to migrate only a full backup file, as in this example. In this case, the following parameter settings take effect in the CreateMigrateTask operation: BackupMode = FULL and IsOnlineDB = True.

      • Access Pending (Incremental Backup): select this method if you want to migrate a full backup file together with log or differential backup files. In this case, the following parameter settings take effect in the CreateMigrateTask operation: BackupMode = UPDF and IsOnlineDB = False.

    • Consistency Check Mode: the mode in which the consistency check is performed. Valid values:

      • Asynchronous DBCC: the DBCC CHECKDB statement is executed after the destination database is opened. This reduces the time that is required to open the destination database and minimizes the downtime of your application. If the destination database is large, the DBCC CHECKDB statement can take a long time to complete. Select this mode if your application is sensitive to downtime but not to the result of the DBCC CHECKDB statement. In this case, the following parameter setting takes effect in the CreateMigrateTask operation: CheckDBMode = AsyncExecuteDBCheck.

      • Synchronous DBCC: the DBCC CHECKDB statement is executed while the destination database is being opened. Select this mode if you want to identify consistency errors between the source database and the destination database based on the result of the DBCC CHECKDB statement. Note that this increases the time that is required to open the destination database. In this case, the following parameter setting takes effect in the CreateMigrateTask operation: CheckDBMode = SyncExecuteDBCheck.

  6. Click OK.

    Wait until the migration task is complete. You can click Refresh to view the latest state of the migration task. If the migration task fails, you can troubleshoot the failure based on the description of the migration task. For more information, see Common errors.
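
The Database Name requirement described in the parameter list can be checked before you click OK. The following is a minimal T-SQL sketch that you can run on the RDS instance by using the privileged account:

    -- List the names of the existing databases on the RDS instance
    -- to rule out a name collision with the database that you want to restore.
    SELECT name FROM sys.databases;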

Step 4: View the migration task

If you want to view details about the migration task, go to the Backup and Restoration page and click the Cloud Migration Records of Backup Data tab. By default, this tab displays the migration tasks over the last week.

Common errors

Each migration task record on the Cloud Migration Records of Backup Data tab of the Backup and Restoration page contains the description of a task. If the migration task fails or an error is reported, you can troubleshoot the failure or error based on the task description. The following common errors may occur:

  • An existing database on the RDS instance has the same name as the source database.

    • Error message: The database (xxx) is already exist on RDS, please backup and drop it, then try again.

    • Cause: If an existing database on the RDS instance has the same name as the source database, the migration is not supported. This mechanism is designed to ensure the security of your data.

    • Solution: If you want to overwrite an existing database on the RDS instance, back up the database, delete the database from the RDS instance, and then create and run a migration task again.

  • A differential backup file is used.

    • Error message: Backup set (xxx.bak) is a Database Differential backup, we only accept a FULL Backup.

    • Cause: The file that you upload is a differential backup file rather than a full backup file. The migration method in this topic supports only full backup files.

  • A log backup file is used.

    • Error message: Backup set (xxx.trn) is a Transaction Log backup, we only accept a FULL Backup.

    • Cause: The file that you upload is a log backup file rather than a full backup file. The migration method in this topic supports only full backup files.

  • The full backup file fails the verification.

    • Error message: Failed to verify xxx.bak, backup file was corrupted or newer edition than RDS.

    • Cause: The full backup file is corrupted, or the self-managed instance runs a later SQL Server version than the RDS instance. For example, this error occurs if the self-managed instance runs SQL Server 2016 and the RDS instance runs SQL Server 2012.

    • Solution: If the full backup file is corrupted, perform a full backup on the self-managed SQL Server instance again. Then, create and run a migration task again. You can check the backup file locally as shown in the sketch at the end of this list. If the self-managed instance runs a later SQL Server version than the RDS instance, select an RDS instance that runs the same version as or a later version than the self-managed instance.

      Note

      To upgrade the major engine version of an existing RDS for SQL Server instance, see Upgrade the major engine version.

  • The DBCC CHECKDB statement fails.

    • Error message: DBCC checkdb failed.

    • Cause: The self-managed SQL Server instance encounters allocation or consistency errors.

    • Solution: Execute the following statement on the self-managed SQL Server instance to fix the error. Then, create and run a migration task again.

      Important

      If you use this solution to fix the issue, your data may be lost.

      DBCC CHECKDB (DBName, REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS, ALL_ERRORMSGS
  • The available storage of the RDS instance is insufficient.

    • Error message: Not Enough Disk Space for restoring, space left (xxx MB) < needed (xxx MB).

    • Cause: The available storage of the RDS instance is less than the minimum storage that is required to restore data by using the full backup file.

    • Solution: Expand the storage capacity of the RDS instance. For more information, see Change the specifications of an ApsaraDB RDS for SQL Server instance.

  • No privileged account is created on the RDS instance.

    • Error message: Your RDS doesn't have any init account yet, please create one and grant permissions on RDS console to this migrated database (XXX).

    • Cause: No privileged account is created on the RDS instance. As a result, the migration task cannot find the account that requires authorization. However, the full backup file has been restored to the RDS instance, and the migration task is successful.

    • Solution: Create a privileged account on the RDS instance. For more information, see Create a privileged account or a standard account.

  • The RAM user does not have the required permissions.

    • Error 1

      • Error: The parameters that are described in substep 5 of Step 3: Create a migration task are correctly configured, but the OK button is dimmed.

      • Cause: You are using a RAM user that does not have the required permissions.

      • Solution: Refer to the "Prerequisites" section of this topic to grant required permissions to the RAM user.

    • Error 2

      • Error: A message appears to indicate that the RAM user does not have permissions to grant the AliyunRDSImportRole permission.

      • Cause: The RAM user that you are using does not have the required permissions.

      • Solution: Use your Alibaba Cloud account to grant the AliyunRAMFullAccess permission to your RAM user. For more information, see Use RAM to manage ApsaraDB RDS permissions.
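
If a backup file fails verification, you can inspect the .bak file locally before you upload it again. The following is a minimal T-SQL sketch; the file path is a placeholder:

    -- Reports the backup metadata, including the engine version that produced the backup.
    RESTORE HEADERONLY FROM DISK = N'D:\Backup\MyDatabase_FULL.bak';

    -- Checks that the backup set is complete and readable. This is not a full
    -- consistency check of the data inside the backup.
    RESTORE VERIFYONLY FROM DISK = N'D:\Backup\MyDatabase_FULL.bak';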

Related operations

  • CreateMigrateTask: creates a data migration task.

  • CreateOnlineDatabaseTask: opens the database to which backup data is migrated on an ApsaraDB RDS for SQL Server instance.

  • DescribeMigrateTasks: queries the tasks that are created to migrate the backup data of an ApsaraDB RDS for SQL Server instance.

  • DescribeOssDownloads: queries the backup file details of a backup data migration task for an ApsaraDB RDS for SQL Server instance.