Tablestore: Operations on data

Last Updated: Sep 19, 2024

You can use the Tablestore CLI to insert, read, update, delete, scan, export, and import data.

Insert data

You can insert a row of data into a data table. Alternatively, you can import a JSON configuration file to insert a row of data into a data table.

Command syntax

put --pk '[primaryKeyValue, primaryKeyValue]' --attr '[{"c":"attributeColumnName", "v":"attributeColumnValue"}, {"c":"attributeColumnName", "v":"attributeColumnValue", "ts":timestamp}]' --condition condition

The following table describes the parameters in the command.

Parameter

Required

Example

Description

-k,--pk

Yes

["86", 6771]

The values of the primary key columns of the row. The value of this parameter is an array.

Important
  • The number and types of primary key columns that you specify must be the same as the actual number and types of primary key columns in the data table.

  • If a primary key column is an auto-increment primary key column, you only need to set its value to null as a placeholder.

-a,--attr

Yes

[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china", "t":"string", "ts":15327798534}]

The attribute columns of the row. The value of this parameter is a JSON array. Each attribute column is specified by using the following fields:

  • c: required. The name of the attribute column.

  • v: required. The value of the attribute column.

  • t: optional. The type of the attribute column. Valid values: integer, string, binary, boolean, and double. If you set this field to string, the value of the attribute column is a string encoded in UTF-8. This field is required for an attribute column of the binary type.

  • ts: optional. The timestamp that is used as the version number of the data. The timestamp can be automatically generated or configured. If you do not configure this field, Tablestore automatically generates a timestamp. For more information, see Data versions and TTL.

--condition

No

ignore

The row existence condition of conditional update that is used to determine whether to insert a row of data. Default value: ignore. Valid values:

  • ignore: Data is inserted regardless of whether the row exists. If the row exists, existing data is overwritten by the inserted data.

  • exist: Data is inserted only when the row exists. Existing data is overwritten by the inserted data.

  • not_exist: Data is inserted only when the row does not exist.

For more information, see Conditional updates.

-i, --input

No

/temp/inputdata.json

The path of the JSON configuration file that is used to insert data.
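The three --condition values can be summarized as a simple decision rule. The sketch below is an illustrative Python model of that rule, not Tablestore code; the function name is an assumption:

```python
def put_is_applied(condition: str, row_exists: bool) -> bool:
    """Model the --condition row existence check for the put command.

    ignore:    the row is written regardless of whether it exists
    exist:     the row is written only when it already exists
    not_exist: the row is written only when it does not exist
    """
    if condition == "ignore":
        return True
    if condition == "exist":
        return row_exists
    if condition == "not_exist":
        return not row_exists
    raise ValueError(f"unknown condition: {condition!r}")

# For example, a put with --condition not_exist against an existing row
# is rejected:
print(put_is_applied("not_exist", row_exists=True))  # False
```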

You can also use a configuration file to insert data. The command syntax varies based on the operating system.

  • Windows

    put -i D:\\localpath\\filename.json
  • Linux and macOS

    put -i /localpath/filename.json

The following sample code provides an example of the content of a configuration file:

{
    "PK":{
        "Values":[
            "86",
            6771
        ]
    },
    "Attr":{
        "Values":[
            {
                "C":"age",
                "V":32,
                "TS":1626860801604,
                "IsInt":true
            }
        ]
    }
}
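A configuration file in this layout can also be generated programmatically. The following Python sketch builds the same JSON document; the helper name is an assumption, and only the keys shown in the sample above (PK, Attr, Values, C, V, TS, IsInt) are used:

```python
import json

def build_put_config(pk_values, attrs):
    """Build a configuration document with the PK/Attr layout shown above.

    attrs is a list of (column_name, value, timestamp_or_None) tuples;
    the JSON keys follow the sample configuration file.
    """
    attr_values = []
    for name, value, ts in attrs:
        column = {"C": name, "V": value}
        if ts is not None:
            column["TS"] = ts  # explicit data version timestamp
        if isinstance(value, int) and not isinstance(value, bool):
            column["IsInt"] = True  # flag integer values, as in the sample
        attr_values.append(column)
    return {"PK": {"Values": list(pk_values)}, "Attr": {"Values": attr_values}}

config = build_put_config(["86", 6771], [("age", 32, 1626860801604)])
with open("/tmp/inputdata.json", "w") as f:
    json.dump(config, f, indent=4)
```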

Examples

  • Example 1

    The following sample code provides an example of how to insert a row of data into a data table. The value of the first primary key column in the row is "86", and the value of the second primary key column is 6771. The row contains two attribute columns, name and country, both of the STRING type.

    put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'
  • Example 2

    The following sample code provides an example of how to insert a row of data into a data table. The value of the first primary key column in the row is "86", and the value of the second primary key column is 6771. The row contains two attribute columns, name and country, both of the STRING type. In this example, data is inserted regardless of whether the row exists. If the row exists, the inserted data overwrites the existing data.

    put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'  --condition ignore
  • Example 3

    The following sample code provides an example of how to insert a row of data into a data table. The value of the first primary key column in the row is "86", and the value of the second primary key column is 6771. The row contains two attribute columns, name and country, both of the STRING type. The timestamp of the data in the country column is 15327798534.

    put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china", "t":"string", "ts":15327798534}]'
  • Example 4

    The following sample code provides an example of how to insert a row of data into a data table in which the second primary key column is an auto-increment primary key column. The value of the first primary key column in the row is "86", and the value of the second primary key column is null. The row contains two attribute columns, name and country, both of the STRING type.

    put --pk '["86", null]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'

Read data

You can read data from a data table and export the data to a local JSON file.

Note

If the row that you want to read does not exist, an empty result is returned.

Command syntax

get --pk '[primaryKeyValue,primaryKeyValue]'

The following table describes the parameters in the command.

Parameter

Required

Example

Description

-k,--pk

Yes

["86",6771]

The values of the primary key columns of the row. The value of this parameter is an array.

Important

The number and types of primary key columns that you specify must be the same as the actual number and types of primary key columns in the data table.

-c,--columns

No

name,uid

The columns that you want to read. You can specify the names of primary key columns or attribute columns. If you do not specify a column, all data in the row is returned.

--max_version

No

1

The maximum number of data versions that can be read.

--time_range_start

No

1626860469000

The version range of data that you want to read. The time_range_start parameter specifies the start timestamp and the time_range_end parameter specifies the end timestamp. The range includes the start value and excludes the end value.

--time_range_end

No

1626865270000

--time_range_specific

No

1626862870000

The specific version of data that you want to read.

-o, --output

No

/tmp/querydata.json

The local path of the JSON file to which the query results are exported.
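The version range above is a half-open interval: a data version with timestamp ts matches when time_range_start ≤ ts < time_range_end. A minimal sketch of this check (the function name is an assumption):

```python
def version_matches(ts: int, time_range_start: int, time_range_end: int) -> bool:
    """True when a data version's timestamp falls in the half-open interval
    [time_range_start, time_range_end): the start timestamp is included and
    the end timestamp is excluded."""
    return time_range_start <= ts < time_range_end

# A version written exactly at time_range_end is not returned.
print(version_matches(1626865270000, 1626860469000, 1626865270000))  # False
```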

Examples

The following sample code provides an example of how to read a row of data in which the value of the first primary key column is "86" and the value of the second primary key column is 6771.

get --pk '["86",6771]'

Update data

You can update a row of data in a data table. Alternatively, you can import a JSON configuration file to update a row of data in a data table.

Command syntax

update --pk '[primaryKeyValue, primaryKeyValue]' --attr '[{"c":"attributeColumnName", "v":"attributeColumnValue"}, {"c":"attributeColumnName", "v":"attributeColumnValue", "ts":timestamp}]' --condition condition

The following table describes the parameters in the command.

Parameter

Required

Example

Description

-k,--pk

Yes

["86", 6771]

The values of the primary key columns of the row. The value of this parameter is an array.

Important

The number and types of primary key columns that you specify must be the same as the actual number and types of primary key columns in the data table.

--attr

Yes

[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china", "ts":15327798534}]

The attribute columns of the row. The value of this parameter is a JSON array. Each attribute column is specified by using the following fields:

  • c: required. The name of the attribute column.

  • v: required. The value of the attribute column.

  • t: optional. The type of the attribute column. Valid values: integer, string, binary, boolean, and double. If you set this field to string, the value of the attribute column is a string encoded in UTF-8. This field is required for an attribute column of the binary type.

  • ts: optional. The timestamp that is used as the version number of the data. The timestamp can be automatically generated or configured. If you do not configure this field, Tablestore automatically generates a timestamp.

--condition

No

ignore

The row existence condition of conditional update that is used to determine whether to update a row of data. Default value: ignore. Valid values:

  • ignore: Data is updated regardless of whether the row exists. If the row does not exist, a new row of data is inserted.

  • exist: Data is updated only when the row exists.

  • not_exist: Data is updated only when the row does not exist.

For more information, see Conditional updates.

-i, --input

No

/tmp/inputdata.json

The path of the JSON configuration file that is used to update data.

You can also use a configuration file to update data. The command syntax varies based on the operating system.

  • Windows

    update -i D:\\localpath\\filename.json
  • Linux and macOS

    update -i /localpath/filename.json

The following sample code provides an example of the content of a configuration file:

{
    "PK":{
        "Values":[
            "86",
            6771
        ]
    },
    "Attr":{
        "Values":[
            {
                "C":"age",
                "V":32,
                "TS":1626860801604,
                "IsInt":true
            }
        ]
    }
}

Examples

The following sample code provides an example of how to update a row of data in which the value of the first primary key column is "86" and the value of the second primary key column is 6771. In this example, data is updated regardless of whether the row exists. If the row does not exist, a new row of data is inserted.

update --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'  --condition ignore

Delete data

You can delete a row of data with the specified primary key.

Command syntax

delete --pk '[primaryKeyValue,primaryKeyValue]'

The following table describes the parameters in the command.

Parameter

Required

Example

Description

-k,--pk

Yes

["86", 6771]

The values of the primary key columns of the row. The value of this parameter is an array.

Important

The number and types of primary key columns that you specify must be the same as the actual number and types of primary key columns in the data table.

Examples

The following sample code provides an example of how to delete a row of data in which the value of the first primary key column is "86" and the value of the second primary key column is 6771.

delete --pk '["86", 6771]'

Scan data

You can scan a data table to obtain all data or a specified number of rows of data in the data table.

Command syntax

scan --limit limit

The following table describes the parameters in the command.

Parameter

Required

Example

Description

--limit

No

10

The maximum number of rows that you want to scan. If you do not configure this parameter, all data in the data table is scanned.

Examples

The following sample code provides an example of how to scan up to 10 rows of data in a data table.

scan --limit 10

Export data

You can export data from a data table to a local JSON file.

Command syntax

scan -o /localpath/filename.json -c attributeColumnName,attributeColumnName,attributeColumnName

The following table describes the parameters in the command.

Parameter

Required

Example

Description

-c, --columns

No

uid,name

The set of columns that you want to export. You can specify the names of primary key columns or attribute columns. If you do not specify a column name, all data in the row is exported.

--max_version

No

1

The maximum number of data versions that can be exported.

--time_range_start

No

1626865596000

The version range of data that you want to export. The time_range_start parameter specifies the start timestamp and the time_range_end parameter specifies the end timestamp. The range includes the start value and excludes the end value.

--time_range_end

No

1626869196000

--time_range_specific

No

1626867396000

The specific version of data that you want to export.

--backward

No

N/A

Specifies that the exported data is sorted in descending order of primary key values.

-o, --output

Yes

/tmp/mydata.json

The local path of the JSON file to which the query results are exported.

-l, --limit

No

10

The maximum number of rows that you want the query to return.

-b, --begin

No

'["86", 6771]'

The value range of data that you want to export. The range of the primary key is a left-closed, right-open interval.

Note
  • If the value of the --begin parameter is [null,null], the start value of the range of data that you want to export is [INF_MIN,INF_MIN]. This specifies that the start value in the first and second primary key columns is infinitely small.

  • If the value of the --end parameter is [null,null], the end value of the range of data that you want to export is [INF_MAX,INF_MAX]. This specifies that the end value in the first and second primary key columns is infinitely large.

-e, --end

No

'["86", 6775]'
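The null placeholders in --begin and --end can be modeled by substituting infinities before comparing primary key values. The sketch below is illustrative only; the sentinel constants and function names are assumptions, not Tablestore internals:

```python
import math

# Stand-ins for Tablestore's INF_MIN and INF_MAX sentinels.
INF_MIN = -math.inf
INF_MAX = math.inf

def expand_begin(pk):
    """Replace null (None) components of --begin with INF_MIN."""
    return [INF_MIN if v is None else v for v in pk]

def expand_end(pk):
    """Replace null (None) components of --end with INF_MAX."""
    return [INF_MAX if v is None else v for v in pk]

# '["50",null]' .. '["100",null]' therefore covers every value of the
# second primary key column within the given range of the first column.
```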

Examples

  • Example 1

    The following sample code provides an example of how to export all data from the current table to the local file mydata.json.

    scan -o /tmp/mydata.json
  • Example 2

    The following sample code provides an example of how to export the data in the uid and name columns of the current table to the local file mydata.json.

    scan -o /tmp/mydata.json -c uid,name
  • Example 3

    The following sample code provides an example of how to export data whose primary key values are within a specific range from the current table to the local file mydata.json. The export starts at the row whose first primary key column is "86" and second primary key column is 6771, and ends before the row whose first primary key column is "86" and second primary key column is 6775.

    scan -o /tmp/mydata.json -b '["86", 6771]' -e '["86", 6775]'
  • Example 4

    The following sample code provides an example of how to export data whose primary key values are within a specific range from the current table to the local file mydata.json. The export starts at the row whose first primary key column is "50" and ends before the row whose first primary key column is "100". The value range of the second primary key column is not specified.

    scan -o /tmp/mydata.json -b '["50",null]' -e '["100",null]'

Import data

You can import data from a local JSON file to a data table.

Important

If the path of the local JSON file contains Chinese characters, an error occurs when you import data.

Command syntax

import -i /localpath/filename.json --ignore_version

The following table describes the parameters in the command.

Parameter

Required

Example

Description

-a, --action

No

put

The mode in which data is imported. Default value: put. Valid values:

  • put: If the row exists, all versions of data in all columns of the existing row are deleted and a new row of data is written to the data table.

  • update: If the row exists, attribute columns can be added to or removed from the row, the specified version of data in an attribute column can be deleted, or the existing data in an attribute column can be updated. If the row does not exist, a new row of data is added.

-i, --input

Yes

/tmp/inputdata.json

The path of the local JSON file from which data is imported to the current data table.

--ignore_version

No

N/A

Ignores timestamp checks and uses the current time as the timestamp.
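The difference between the put and update import modes can be modeled with plain dictionaries. This is a simplified, versionless sketch (the function names are assumptions), not the CLI's implementation:

```python
def import_put(table, pk, attrs):
    """put mode: an existing row is deleted entirely and replaced."""
    table[pk] = dict(attrs)

def import_update(table, pk, attrs):
    """update mode: attribute columns are merged into the existing row;
    a new row is created when none exists."""
    table.setdefault(pk, {}).update(attrs)

table = {("redchen", 0): {"country": "china0", "name": "redchen0"}}
import_update(table, ("redchen", 0), {"age": 32})   # keeps existing columns
import_put(table, ("redchen", 0), {"name": "red"})  # drops all other columns
```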

The following sample code provides an example of the content of a local configuration file:

{"PK":{"Values":["redchen",0]},"Attr":{"Values":[{"C":"country","V":"china0"},{"C":"name","V":"redchen0"}]}}
{"PK":{"Values":["redchen",1]},"Attr":{"Values":[{"C":"country","V":"china1"},{"C":"name","V":"redchen1"}]}}                              
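Each line of the file is one self-contained JSON record. Under that assumption, the file can be produced and checked programmatically; the following Python sketch writes and re-parses the sample records above (the file path is illustrative):

```python
import json

records = [
    {"PK": {"Values": ["redchen", 0]},
     "Attr": {"Values": [{"C": "country", "V": "china0"},
                         {"C": "name", "V": "redchen0"}]}},
    {"PK": {"Values": ["redchen", 1]},
     "Attr": {"Values": [{"C": "country", "V": "china1"},
                         {"C": "name", "V": "redchen1"}]}},
]

# Write one JSON document per line, matching the sample file above.
with open("/tmp/mydata.json", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Each non-empty line parses independently.
with open("/tmp/mydata.json") as f:
    parsed = [json.loads(line) for line in f if line.strip()]
```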

Examples

  • Example 1

    The following sample code provides an example of how to import data from the mydata.json file to the current data table.

    import -i /tmp/mydata.json
  • Example 2

    The following sample code provides an example of how to import data from the mydata.json file to the current data table with the current time used as the timestamp.

    import -i /tmp/mydata.json --ignore_version