
Export full data from Tablestore to OSS

Last Updated: Nov 28, 2024

If you want to back up full data in Tablestore in a cost-effective manner or export Tablestore data as a file to your local device, you can use DataWorks Data Integration to export full data from Tablestore to Object Storage Service (OSS). After the full data is exported to OSS, you can download the OSS object that contains the exported Tablestore data to your local device.

Usage notes

This feature is applicable to the Wide Column model and TimeSeries model of Tablestore.

  • Wide Column model: You can use the codeless user interface (UI) or code editor to export data from a data table in Tablestore to OSS.

  • TimeSeries model: You can use only the code editor to export data from a time series table in Tablestore to OSS.

Prerequisites

  • OSS is activated and an OSS bucket is created. For more information, see Activate OSS and Create buckets.

  • The name of the Tablestore instance and the names of the data tables or time series tables whose data you want to synchronize to OSS are confirmed and recorded.

  • DataWorks is activated and a workspace is created. For more information, see Activate DataWorks and Create a workspace.

  • A Resource Access Management (RAM) user is created and granted the AliyunOSSFullAccess and AliyunOTSFullAccess policies, which grant full permissions on OSS and Tablestore, respectively. For more information, see Create a RAM user and Grant permissions to a RAM user.

    Important

    To prevent security risks caused by the leakage of the AccessKey pair of your Alibaba Cloud account, we recommend that you use the AccessKey pair of a RAM user.

  • An AccessKey pair is created for the RAM user. For more information, see Create an AccessKey pair.

Step 1: Add a Tablestore data source

To add a Tablestore instance as a data source, perform the following steps.

  1. Go to the Data Integration page.

    Log on to the DataWorks console, select a region in the upper-left corner, choose Data Development and Governance > Data Integration, select a workspace from the drop-down list, and then click Go to Data Integration.

  2. In the left-side navigation pane, click Data Source.

  3. On the Data Source page, click Add Data Source.

  4. In the Add Data Source dialog box, click Tablestore.

  5. In the Add OTS data source dialog box, configure the following parameters.

    • Data Source Name: The name of the data source. The name can contain letters, digits, and underscores (_), and must start with a letter.

    • Data Source Description: The description of the data source. The description cannot exceed 80 characters in length.

    • Endpoint: The endpoint of the Tablestore instance. For more information, see Endpoints.

      If the Tablestore instance and the resources of the destination data source are in the same region, enter a virtual private cloud (VPC) endpoint. Otherwise, enter the public endpoint. For sample endpoint formats, see the examples after this list.

    • Tablestore instance name: The name of the Tablestore instance. For more information, see Instance.

    • AccessKey ID and AccessKey Secret: The AccessKey ID and AccessKey secret of your Alibaba Cloud account or RAM user. For more information about how to create an AccessKey pair, see Create an AccessKey pair.
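    The following endpoint formats are provided for reference only; the instance name myinstance and the region ID cn-hangzhou are hypothetical placeholders for your actual values:

    • Public endpoint: https://myinstance.cn-hangzhou.ots.aliyuncs.com

    • VPC endpoint: https://myinstance.cn-hangzhou.vpc.tablestore.aliyuncs.com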

  6. Test the network connectivity between the data source and the resource group that you select.

    To ensure that your synchronization nodes run as expected, you need to test the connectivity between the data source and all types of resource groups on which your synchronization nodes will run.

    Important

    A synchronization task can use only one type of resource group. By default, only shared resource groups for Data Integration are displayed in the resource group list. To ensure the stability and performance of data synchronization, we recommend that you use an exclusive resource group for Data Integration.

    1. Click Purchase to create a new resource group or click Associate Purchased Resource Group to associate an existing resource group. For more information, see Create and use an exclusive resource group for Data Integration.

    2. After the resource group is started, click Test Network Connectivity in the Connection Status (Production Environment) column of the resource group.

      If Connected is displayed, the connectivity test is passed.

  7. If the data source passes the network connectivity test, click Complete.

    The newly created data source is displayed in the data source list.

Step 2: Add an OSS data source

The procedure is similar to that in Step 1. In this step, click OSS in the Add Data Source dialog box.

In this example, the OSS data source is named OTS2OSS.

Important
  • When you configure parameters for an OSS data source, make sure that the endpoint does not contain the name of the specified OSS bucket and starts with http:// or https://.

  • You can select RAM role authorization mode or Access Key mode as the access mode of the OSS data source.

    • Access Key mode: You can use the AccessKey pair of an Alibaba Cloud account or a RAM user to access the OSS data source.

    • RAM role authorization mode: DataWorks can assume related roles to access the OSS data source by using Security Token Service (STS) tokens. This ensures higher security. For more information, see Use the RAM role-based authorization mode to add a data source.

      If you select RAM role authorization mode as Access Mode for the first time, the Warning dialog box appears to prompt you to create a service-linked role for DataWorks. Click Enable authorization. Then, select the service-linked role that you created for DataWorks and complete authorization.


Step 3: Create a batch synchronization node

  1. Go to the DataStudio console.

    Log on to the DataWorks console, select a region in the upper-left corner, choose Data Development and Governance > Data Development, select a workspace from the drop-down list, and then click Go to DataStudio.

  2. On the Scheduled Workflow page of the DataStudio console, click Business Flow and select a workflow.

    For information about how to create a workflow, see Create a workflow.

  3. Right-click the Data Integration node and choose Create Node > Offline synchronization.

  4. In the Create Node dialog box, select a path and enter a node name.

  5. Click Confirm.

    The newly created offline synchronization node will be displayed under the Data Integration node.

Step 4: Configure and start a batch synchronization task

When you configure a task to synchronize full data from Tablestore to OSS, select a task configuration method based on the data storage model that you use.

Configure a task to synchronize data from a data table

  1. Double-click the created batch synchronization node in the Data Integration folder.

  2. Establish network connections between the resource group and data sources.

    Select the source and destination data sources for the data synchronization task and the resource group that is used to run the data synchronization task. Establish network connections between the resource group and data sources and test the connectivity.

    Important

    Data synchronization tasks are run by using resource groups. Select a resource group and make sure that network connections between the resource group and data sources are established.

    1. In the Configure Network Connections and Resource Group step, set the Source parameter to Tablestore and the Data Source Name parameter to the name of the Tablestore data source that you added in Step 1: Add a Tablestore data source.

    2. Select a resource group.

      After you select a resource group, the system displays the region and specifications of the resource group and automatically tests the connectivity between the resource group and the source data source.

      Important

      Make sure that the resource group is the same as the one that you selected when you added the data source.

    3. Set the Destination parameter to OSS and the Data Source Name parameter to the name of the OSS data source that you added in Step 2: Add an OSS data source.

      The system automatically tests the connectivity between the resource group and the destination data source.

    4. After the network connectivity test is passed, click Next.

  3. Configure and save the batch synchronization task.

    You can configure a task to synchronize data from a data table by using the codeless UI or code editor based on your business requirements.

    (Recommended) Use the codeless UI

    1. In the Configure Source and Destination section of the Configure tasks step, configure the source and destination data sources based on your business requirements.

      Configure the source data source

      • Table: The name of the Tablestore data table.

      • Range of Primary Key(begin) and Range of Primary Key(end): The start primary key and the end primary key that specify the range of the data that you want to read. Each value must be a JSON array.

        The start primary key and the end primary key must be valid primary keys or virtual points that consist of values of the INF_MIN and INF_MAX types. The number of columns in a virtual point must be the same as the number of primary key columns.

        The INF_MIN type specifies an infinitely small value that is less than a value of any other type. The INF_MAX type specifies an infinitely large value that is greater than a value of any other type.

        The rows in a data table are sorted in ascending order by primary key. The range of the data that is read is a left-closed, right-open interval: all rows whose primary key is greater than or equal to the start primary key and less than the end primary key are returned.

        For example, assume that a table contains the pk1 and pk2 primary key columns, where the pk1 column is of the STRING type and the pk2 column is of the INTEGER type.

        To export full data from the table, specify the following parameters:

      • Sample Range of Primary Key(begin) parameter

        [
          {
            "type": "INF_MIN"
          },
          {
            "type": "INF_MIN"
          }
        ]
      • Sample Range of Primary Key(end) parameter

        [
          {
            "type": "INF_MAX"
          },
          {
            "type": "INF_MAX"
          }
        ]

      To export the rows in which the value of the pk1 column is tablestore, specify the following parameters:

      • Sample Range of Primary Key(begin) parameter

        [
          {
            "type": "STRING",
            "value": "tablestore"
          },
          {
            "type": "INF_MIN"
          }
        ]
      • Sample Range of Primary Key(end) parameter

        [  
          {
            "type": "STRING",
            "value": "tablestore"
          },
          {
            "type": "INF_MAX"
          }
        ]
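
      The following additional example illustrates the left-closed, right-open interval. It assumes the same hypothetical pk1 and pk2 schema. To export the rows in which the value of the pk1 column is greater than or equal to a and less than b, specify the following parameters:

      • Sample Range of Primary Key(begin) parameter

        [
          {
            "type": "STRING",
            "value": "a"
          },
          {
            "type": "INF_MIN"
          }
        ]
      • Sample Range of Primary Key(end) parameter

        [
          {
            "type": "STRING",
            "value": "b"
          },
          {
            "type": "INF_MIN"
          }
        ]

      Rows in which the value of the pk1 column is b are excluded because the end primary key is exclusive.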


      • Split configuration information: The custom rule that is used to split data. We recommend that you do not configure this parameter in common scenarios. If data is unevenly distributed in a Tablestore table and the automatic splitting feature of Tablestore Reader fails, you can specify a custom rule to split data. The split points must be within the range between the start primary key and the end primary key, and you do not need to specify all primary key columns. The value is a JSON array. For an illustrative example, see the sketch below.
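
        The following sketch shows one possible split configuration. It assumes a hypothetical first primary key column of the STRING type and a hypothetical split point m that falls between the start primary key and the end primary key:

        [
          {
            "type": "STRING",
            "value": "m"
          }
        ]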

      Configure the destination data source

      • Text type: The format of the object that is written to OSS, such as csv or text.

        Note: Different object formats require different configurations. After you select an object format, configure the parameters that are displayed for that format.

      • File name (including path): The name of the object in OSS. The name can contain a path, such as tablestore/20231130/myotsdata.csv. This parameter is displayed only when you set the Text type parameter to text, csv, or orc.

      • path: The path of the object in OSS, such as tablestore/20231130/. This parameter is displayed only when you set the Text type parameter to parquet.

      • fileName: The name of the object in OSS. This parameter is displayed only when you set the Text type parameter to parquet.

      • Column Delimiter: The delimiter that is used to separate columns when data is written to the OSS object. This parameter is displayed only when you set the Text type parameter to text or csv.

      • Row Delimiter: The custom delimiter that is used to separate rows, such as \u0001. Use a delimiter that does not appear in the data. This parameter is displayed only when you set the Text type parameter to text.

        If you want to use the default row delimiter (\n for Linux or \r\n for Windows), we recommend that you leave this parameter empty. The system automatically writes data based on the default row delimiter.

      • Coding: The encoding format of the OSS object to which data is written. This parameter is displayed only when you set the Text type parameter to text or csv.

      • null value: The string that represents a null value in the OSS object. For example, if you set this parameter to null, a field that is null in the source is written to the OSS object as the string null. This parameter is displayed only when you set the Text type parameter to text, csv, or orc.

      • Time format: The format in which date data is written to the OSS object, such as yyyy-MM-dd. This parameter is displayed only when you set the Text type parameter to text or csv.

      • Prefix conflict: The processing method that is used when the specified object name is the same as the name of an existing object in OSS. Valid values:

        • Replace: deletes the existing object and creates a new object of the same name.

        • Retain: retains the existing object and creates a new object whose name combines the existing object name and a random suffix.

        • Error: reports an error and stops the batch synchronization task.

      • Split File: The maximum size of a single OSS object to which data is written. Unit: MB. The maximum size is 100 GB. When the size of the object that is being written reaches this value, the system generates a new object and continues to write data until all data is written. This parameter is displayed only when you set the Text type parameter to text or csv.

      • Write as Single File: Specifies whether to write a single object to OSS at a time. By default, multiple objects are written to OSS. In this case, if no data is read during data writing, an empty object that contains only the object header is generated if an object header is configured, or a completely empty object is generated if no object header is configured. If you want to write a single object to OSS at a time, select Write as Single File. In this case, if no data is read during data writing, no empty object is generated. This parameter is displayed only when you set the Text type parameter to text or csv.

      • First Row as Table Header: Specifies whether to write the first row as table headers when data is written to an object. By default, no table headers are generated. If you want to write the first row as table headers, select First Row as Table Header. This parameter is displayed only when you set the Text type parameter to text or csv.
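
      For reference, the preceding codeless-UI settings map to OSS Writer parameters in the code editor. The following sketch shows the assumed mapping for a csv export; all values are illustrative, and the header parameter is an assumption for the First Row as Table Header setting:

        {
            "fileFormat": "csv", // Text type
            "object": "tablestore/20231130/myotsdata.csv", // File name (including path)
            "fieldDelimiter": ",", // Column Delimiter
            "encoding": "UTF-8", // Coding
            "nullFormat": "null", // null value
            "dateFormat": "yyyy-MM-dd", // Time format
            "writeMode": "truncate", // Prefix conflict: Replace
            "writeSingleObject": true, // Write as Single File
            "header": ["pk1", "pk2", "col1"] // First Row as Table Header. The column names are hypothetical.
        }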

    2. In the Field Mapping section, click the edit icon next to Source field and manually edit the source fields that you want to synchronize to OSS. You do not need to specify the destination fields.

    3. In the Channel Control section, configure the parameters for task execution, such as the Task Expected Maximum Concurrency, Synchronization rate, Policy for Dirty Data Records, and Distributed Execution parameters. For more information about the parameters, see Configure channel control policies. For a sketch of how these settings map to the script in the code editor, see the example after these steps.

    4. Click the save icon to save the configurations.

      Note

      If you do not save the configurations, a message prompting you to save the configurations appears when you perform subsequent operations. Click OK to save the configurations.
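
      The Channel Control settings correspond to the setting section of the code-editor script that is shown in the next section. The following sketch shows the assumed mapping; the values are illustrative:

        {
            "setting": {
                "errorLimit": {
                    "record": "0" // Policy for Dirty Data Records: the maximum number of dirty data records allowed.
                },
                "speed": {
                    "concurrent": 2, // Task Expected Maximum Concurrency.
                    "throttle": false // Synchronization rate: false specifies that the rate is not throttled.
                }
            }
        }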

    Use the code editor

    To synchronize full data, you must use Tablestore Reader and OSS Writer. For more information about how to configure the task by using the code editor, see Tablestore data source and OSS data source.

    Important

    After you switch from the codeless UI to the code editor, you cannot switch back. Proceed with caution.

    1. In the Configure tasks step, click the icon for switching to the code editor. In the message that appears, click OK.

    2. In the code editor, specify the parameters based on the following sample code.

      Important
      • In most cases, a task that exports full data is run only once. Therefore, you do not need to configure scheduling parameters for the task. For information about how to configure scheduling parameters, see Synchronize incremental data to OSS.

      • If the script configurations contain variables such as ${date}, set each variable to a specific value when you run the task to synchronize data.

      • Comments are provided in the sample code to help you understand the configurations. Delete all comments when you use the sample code.

      {
          "type": "job", // You cannot change the value of this parameter. 
          "version": "2.0", // The version number. You cannot change the value of this parameter. 
          "steps": [
              {
                  "stepType":"ots",// The name of the reader. You cannot change the value of this parameter. 
                  "parameter":{
                      "datasource":"otssource",// The name of the Tablestore data source. Specify this parameter based on your business requirements. 
                      "newVersion":"true",// Use Tablestore Reader of the latest version. 
                      "mode": "normal",// Read data in row mode. 
                      "isTimeseriesTable":"false",// Configure the table as a wide table rather than a time series table. 
                      "column":[// Required. The names of the columns you want to export to OSS. 
                          {
                              "name":"column1"// The name of the column that you want to export from the Tablestore data source to OSS. 
                          },
                          {
                              "name":"column2"
                          },
                          {
                              "name":"column3"
                          },
                          {
                              "name":"column4"
                          },
                          {
                              "name":"column5"
                          }
                      ],
                      "range":{
                          "split":[ // The partition configurations of the data table in Tablestore. You can specify this parameter to accelerate the export. In most cases, you do not need to specify this parameter. 
                          ],
                          "end":[// The information about the end primary key column in Tablestore. To export full data, set this parameter to INF_MAX. To export only part of data, specify this parameter based on your business requirements. If the data table contains multiple primary key columns, configure the information about the primary key columns for the end parameter. 
                              {
                                  "type":"STRING",
                                  "value":"endValue"
                              },
                              {
                                  "type":"INT",
                                  "value":"100"
                              },
                              {
                                  "type":"INF_MAX"
                              },
                              {
                                  "type":"INF_MAX"
                              }
                          ],
                          "begin":[// The information about the start primary key column in Tablestore. To export full data, set this parameter to INF_MIN. To export only part of data, specify this parameter based on your business requirements. If the data table contains multiple primary key columns, configure the information about the primary key columns for the begin parameter. 
                              {
                                  "type":"STRING",
                                  "value":"beginValue"
                              },
                              {
                                  "type":"INT",
                                  "value":"0"
                              },
                              {
                                  "type":"INF_MIN"
                              },
                              {
                                  "type":"INF_MIN"
                              }
                          ]
                      },
                      "table":"datatable"// The name of the data table in Tablestore. 
                  },
                  "name":"Reader",
                  "category":"reader"
              },
              {
                  "stepType": "oss",// The name of the writer. You cannot change the value of this parameter. 
                  "parameter": {
                      "nullFormat": "null", // The string used to identify the null field value. The value can be an empty string. 
                      "dateFormat": "yyyy-MM-dd HH:mm:ss",// The format of the time. 
                      "datasource": "osssource", // The name of the OSS data source. Specify this parameter based on your business requirements. 
                      "envType": 1,
                      "writeSingleObject": true, // Write a single object to OSS at a time. 
                      "writeMode": "truncate", // The operation to be performed by the system if an object that has the specified object name exists in the OSS data source. Valid values: truncate, append, and nonConflict. To export full data, set this parameter to truncate. A value of truncate specifies that the system clears the object in the OSS data source. A value of append specifies that the system appends the data to the object in the OSS data source. A value of nonConflict specifies that an error is reported. 
                      "encoding": "UTF-8", // The encoding format. 
                      "fieldDelimiter": ",", // The delimiter used to separate columns. 
                      "fileFormat": "csv", // The exported file format. Valid values: csv, text, parquet, and orc. 
                      "object": "tablestore/20231130/myotsdata.csv" // The prefix of the object name in the OSS data source. You do not need to include the bucket name in the prefix. Example: tablestore/20231130/. To perform scheduled export, you must use variables in the prefix such as tablestore/${date}. Then, specify the ${date} variable when you configure scheduling parameters. 
                  },
                  "name": "Writer",
                  "category": "writer"
              },
              {
                  "name": "Processor",
                  "stepType": null,
                  "category": "processor",
                  "copies": 1,
                  "parameter": {
                      "nodes": [],
                      "edges": [],
                      "groups": [],
                      "version": "2.0"
                  }
              }
          ],
          "setting": {
              "executeMode": null,
              "errorLimit": {
                  "record": "0" // When the number of errors exceeds the value of this parameter, the data fails to be imported to OSS. 
              },
              "speed": {
                  "concurrent": 2, // The concurrency. 
                  "throttle": false
              }
          },
          "order": {
              "hops": [
                  {
                      "from": "Reader",
                      "to": "Writer"
                  }
              ]
          }
      }
    3. Click the save icon to save the configurations.

      Note

      If you do not save the script, a message prompting you to save the script appears when you perform subsequent operations. Click OK to save the script.

  4. Run the synchronization task.

    Important

    In most cases, you need to synchronize full data only once and do not need to configure scheduling properties.

    1. Click the run icon.

    2. In the Parameters dialog box, select the name of the resource group from the drop-down list.

    3. Click Run.

      After the synchronization task is complete, click the URL of the run log on the Runtime Log tab to go to the run log details page. On the details page, check the value of the Current task status parameter.

      If the value of the Current task status parameter is FINISH, the task is complete.

Configure a task to synchronize data from a time series table

  1. Double-click the created batch synchronization node in the Data Integration folder.

  2. Establish network connections between the resource group and data sources.

    Select the source and destination data sources for the data synchronization task and the resource group that is used to run the data synchronization task. Establish network connections between the resource group and data sources and test the connectivity.

    Important

    Data synchronization tasks are run by using resource groups. Select a resource group and make sure that network connections between the resource group and data sources are established.

    1. In the Configure Network Connections and Resource Group step, set the Source parameter to Tablestore and the Data Source Name parameter to the name of the Tablestore data source that you added in Step 1: Add a Tablestore data source.

    2. Select a resource group.

      After you select a resource group, the system displays the region and specifications of the resource group and automatically tests the connectivity between the resource group and the source data source.

      Important

      Make sure that the resource group is the same as the one that you selected when you added the data source.

    3. Set the Destination parameter to OSS and the Data Source Name parameter to the name of the OSS data source that you added in Step 2: Add an OSS data source.

      The system automatically tests the connectivity between the resource group and the destination data source.

    4. After the network connectivity test is passed, click Next.

  3. Configure and save the batch synchronization task.

    You can configure a task to synchronize data from a time series table only by using the code editor. To synchronize full data, you must use Tablestore Reader and OSS Writer. For more information about how to configure the task by using the code editor, see Tablestore data source and OSS data source.

    Important

    After you switch from the codeless UI to the code editor, you cannot switch back. Proceed with caution.

    1. In the Configure tasks step, click the icon for switching to the code editor. In the message that appears, click OK.

    2. In the code editor, specify the parameters based on the following sample code.

      Important
      • In most cases, a task that exports full data is run only once. Therefore, you do not need to configure scheduling parameters for the task. For information about how to configure scheduling parameters, see Synchronize incremental data to OSS.

      • If the script configurations contain variables such as ${date}, set each variable to a specific value when you run the task to synchronize data.

      • Comments are provided in the sample code to help you understand the configurations. Delete all comments when you use the sample code.

      {
          "type": "job",
          "version": "2.0", // The version number. You cannot change the value of this parameter. 
          "steps": [
              {
                  "stepType":"ots",// The name of the reader. You cannot change the value of this parameter. 
                  "parameter":{
                      "datasource":"otssource",// The name of the Tablestore data source. Specify this parameter based on your business requirements. 
                      "table": "timeseriestable",// The name of the time series table. 
                      // To read time series data, you must set the mode parameter to normal. 
                      "mode": "normal",
                      // To read time series data, you must set the newVersion parameter to true. 
                      "newVersion": "true",
                      // Configure the table as a time series table. 
                      "isTimeseriesTable":"true",
                      // Optional. Specify the name of the measurement from which you want to read time series data. If you do not specify this parameter, all data in the table is read. 
                      "measurementName":"measurement_1",
                      "column": [
                          // The name of the measurement column. Set the value to _m_name. If you do not need to export data in the column, you do not need to specify this parameter. 
                          { 
                              "name": "_m_name" 
                          },
                          // The name of the data source column. Set the value to _data_source. If you do not need to export data in the column, you do not need to specify this parameter. 
                          { 
                              "name": "_data_source" 
                          },
                          // The timestamp of the data point column. Unit: microseconds. Set the name parameter to _time and the type parameter to INT. If you do not need to export data in the column, you do not need to specify this parameter. 
                          {
                              // The name of the column. 
                              "name": "_time",
                              // The type of the column. 
                              "type": "INT"
                          },
                          // The name of the time series tag column. If the time series data has multiple tags, you can configure multiple time series tag columns. 
                          {
                              // The name of the time series tag column. Specify this parameter based on your business requirements. 
                              "name": "tagA",
                              // Specify whether to configure the column as a time series tag column. Default value: false. If you want to configure the column as a time series tag column, set this parameter to true. 
                              "is_timeseries_tag":"true"
                          },
                          {
                              "name": "double_0",
                              "type":"DOUBLE"
                          },
                          {
                              "name": "string_0",
                              "type":"STRING"
                          },
                          {
                              "name": "long_0",
                              "type":"INT"
                          },
                          {
                              "name": "binary_0",
                              "type":"BINARY"
                          },
                          {
                              "name": "bool_0",
                              "type":"BOOL"
                          }
                      ]
                  },
                  "name":"Reader",
                  "category":"reader"
              },
              {
                  "stepType": "oss",// The name of the writer. You cannot change the value of this parameter. 
                  "parameter": {
                      "nullFormat": "null", // The string used to identify the null field value. The value can be an empty string. 
                      "dateFormat": "yyyy-MM-dd HH:mm:ss",// The format of the time. 
                      "datasource": "osssource", // The name of the OSS data source. Specify this parameter based on your business requirements. 
                      "envType": 1,
                      "writeSingleObject": true, // Write a single object to OSS at a time. 
                      "writeMode": "truncate", // The operation to be performed by the system if an object that has the specified object name exists in the OSS data source. Valid values: truncate, append, and nonConflict. To export full data, set this parameter to truncate. A value of truncate specifies that the system clears the object in the OSS data source. A value of append specifies that the system appends the data to the object in the OSS data source. A value of nonConflict specifies that an error is reported. 
                      "encoding": "UTF-8", // The encoding format. 
                      "fieldDelimiter": ",", // The delimiter used to separate columns. 
                      "fileFormat": "csv", // The exported file format. Valid values: csv, text, parquet, and orc. 
                      "object": "tablestore/20231130/myotsdata.csv" // The prefix of the object name in the OSS data source. You do not need to include the bucket name in the prefix. Example: tablestore/20231130/. To perform scheduled export, you must use variables in the prefix such as tablestore/${date}. Then, specify the ${date} variable when you configure scheduling parameters. 
                  },
                  "name": "Writer",
                  "category": "writer"
              },
              {
                  "name": "Processor",
                  "stepType": null,
                  "category": "processor",
                  "copies": 1,
                  "parameter": {
                      "nodes": [],
                      "edges": [],
                      "groups": [],
                      "version": "2.0"
                  }
              }
          ],
          "setting": {
              "executeMode": null,
              "errorLimit": {
                  "record": "0" // When the number of errors exceeds the value of this parameter, the data fails to be imported to OSS. 
              },
              "speed": {
                  "concurrent": 2, // The concurrency. 
                  "throttle": false
              }
          },
          "order": {
              "hops": [
                  {
                      "from": "Reader",
                      "to": "Writer"
                  }
              ]
          }
      }
    3. Click the save icon to save the configurations.

      Note

      If you do not save the script, a message prompting you to save the script appears when you perform subsequent operations. Click OK to save the script.

  4. Run the synchronization task.

    Important

    In most cases, you need to synchronize full data only once and do not need to configure scheduling properties.

    1. Click the run icon.

    2. In the Parameters dialog box, select the name of the resource group from the drop-down list.

    3. Click Run.

      After the synchronization task is complete, click the URL of the run log on the Runtime Log tab to go to the run log details page. On the details page, check the value of the Current task status parameter.

      If the value of the Current task status parameter is FINISH, the task is complete.

Step 5: View the data exported to OSS

  1. Log on to the OSS console.

  2. Click Buckets in the left-side navigation pane. On the Buckets page, find the bucket to which data is synchronized and click the name of the bucket.

  3. On the Objects page, select an object and download the object to check whether the data is synchronized as expected.


References

  • After you export full data from Tablestore to OSS, you can synchronize incremental data in Tablestore to OSS. For more information, see Synchronize incremental data to OSS.

  • After you export full data from Tablestore to OSS, you can use the time-to-live (TTL) management feature to clear historical data that is no longer needed in Tablestore tables. For more information, see the Data versions and TTL topic or the "Appendix: Manage a time series table" section of the Use the TimeSeries model in the Tablestore console topic.

  • You can download the OSS object that contains the exported Tablestore data to your local device by using the OSS console or ossutil. For more information, see Simple download.

  • To prevent important data from being unavailable due to accidental deletion or malicious tampering, you can use Cloud Backup to back up data in wide tables of Tablestore instances on a regular basis and restore lost or damaged data at your earliest opportunity. For more information, see Overview.

  • If you want to implement tiered storage for the hot and cold data of Tablestore, full backup of Tablestore data, and large-scale real-time data analysis, you can use the data delivery feature of Tablestore. For more information, see Overview.