You can use the Tablestore CLI to insert, read, update, delete, scan, export, and import data.
Insert data
You can insert a row of data into a data table. Alternatively, you can import a JSON configuration file to insert a row of data into a data table.
Command syntax
put --pk '[primaryKeyValue, primaryKeyValue]' --attr '[{"c":"attributeColumnName", "v":"attributeColumnValue"}, {"c":"attributeColumnName", "v":"attributeColumnValue", "ts":timestamp}]' --condition condition
The following table describes the parameters in the command.
Parameter | Required | Example | Description |
-k, --pk | Yes | ["86", 6771] | The values of the primary key columns of the row. The value of this parameter is an array. Important: The number and types of primary key columns that you specify must be the same as the actual number and types of primary key columns in the data table. |
-a, --attr | Yes | [{"c":"name", "v":"redchen"}, {"c":"country", "v":"china", "t":"string", "ts":15327798534}] | The attribute columns of the row. The value of this parameter is a JSON array in which each attribute column is specified by the following fields: c specifies the name of the attribute column, v specifies the value of the attribute column, t optionally specifies the data type of the value, and ts optionally specifies the timestamp of the data. |
--condition | No | ignore | The row existence condition of the conditional update that is used to determine whether to insert the row. Default value: ignore. For more information about the valid values, see Conditional updates. |
-i, --input | No | /temp/inputdata.json | The path of the JSON configuration file that is used to insert data. |
You can also use a configuration file to insert data. The command syntax varies based on the operating system.
Windows
put -i D:\\localpath\\filename.json
Linux and macOS
put -i /localpath/filename.json
The following sample code provides an example of the content of a configuration file:
{
"PK":{
"Values":[
"86",
6771
]
},
"Attr":{
"Values":[
{
"C":"age",
"V":32,
"TS":1626860801604,
"IsInt":true
}
]
}
}
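If you generate configuration files programmatically, the structure above can be built as a plain dictionary. The following Python sketch is illustrative and not part of the CLI; it only reproduces the field names (PK, Attr, Values, C, V, TS, IsInt) shown in the sample file above.

```python
import json

def build_put_config(pk_values, attrs):
    """Build a dict in the layout of the sample configuration file above.

    pk_values: list of primary key values, for example ["86", 6771].
    attrs: list of attribute-column dicts using the C/V/TS/IsInt fields
           shown in the sample file.
    """
    return {"PK": {"Values": pk_values}, "Attr": {"Values": attrs}}

config = build_put_config(
    ["86", 6771],
    [{"C": "age", "V": 32, "TS": 1626860801604, "IsInt": True}],
)

# Write the file that `put -i /localpath/filename.json` would consume.
with open("filename.json", "w") as f:
    json.dump(config, f, indent=4)
```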
Examples
Example 1
The following sample code provides an example on how to insert a row of data into a data table. In this example, the value of the first primary key column is "86", the value of the second primary key column is 6771, and the row contains two attribute columns of the STRING type: name and country.
put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'
Example 2
The following sample code provides an example on how to insert a row of data into a data table. In this example, the value of the first primary key column is "86", the value of the second primary key column is 6771, and the row contains two attribute columns of the STRING type: name and country. Because the --condition parameter is set to ignore, the data is inserted regardless of whether the row exists. If the row exists, the inserted data overwrites the existing data.
put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]' --condition ignore
Example 3
The following sample code provides an example on how to insert a row of data into a data table. In this example, the value of the first primary key column is "86", the value of the second primary key column is 6771, and the row contains two attribute columns of the STRING type: name and country. The timestamp of the data in the country column is 15327798534.
put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china", "t":"string", "ts":15327798534}]'
Example 4
The following sample code provides an example on how to insert a row of data into a data table in which the second primary key column is an auto-increment primary key column. In this example, the value of the first primary key column is "86", the value of the second primary key column is null, and the row contains two attribute columns of the STRING type: name and country.
put --pk '["86", null]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'
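When a put command is assembled from application data, the --pk and --attr values must be valid JSON and must be quoted for the shell, as the single quotes in the examples above show. The following Python sketch is an illustrative helper, not part of the CLI; it builds the same command line from native Python values.

```python
import json
import shlex

def put_command(pk, attrs, condition=None):
    """Assemble a Tablestore CLI `put` command line.

    pk: list of primary key values; attrs: list of {"c": ..., "v": ...}
    dicts as shown in the examples above. shlex.quote reproduces the
    shell quoting used in those examples for arbitrary values.
    """
    parts = [
        "put",
        "--pk", shlex.quote(json.dumps(pk)),
        "--attr", shlex.quote(json.dumps(attrs)),
    ]
    if condition is not None:
        parts += ["--condition", condition]
    return " ".join(parts)

cmd = put_command(
    ["86", 6771],
    [{"c": "name", "v": "redchen"}, {"c": "country", "v": "china"}],
    condition="ignore",
)
```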
Read data
You can read data from a data table and export the data to a local JSON file.
If the row that you want to read does not exist, an empty result is returned.
Command syntax
get --pk '[primaryKeyValue,primaryKeyValue]'
The following table describes the parameters in the command.
Parameter | Required | Example | Description |
-k,--pk | Yes | ["86",6771] | The values of the primary key columns of the row. The value of this parameter is an array. Important The number and types of primary key columns that you specify must be the same as the actual number and types of primary key columns in the data table. |
-c,--columns | No | name,uid | The columns that you want to read. You can specify the names of primary key columns or attribute columns. If you do not specify a column, all data in the row is returned. |
--max_version | No | 1 | The maximum number of data versions that can be read. |
--time_range_start | No | 1626860469000 | The start timestamp of the version range of data that you want to read. The range includes the start value. |
--time_range_end | No | 1626865270000 | The end timestamp of the version range of data that you want to read. The range excludes the end value. |
--time_range_specific | No | 1626862870000 | The specific version of data that you want to read. |
-o, --output | No | /tmp/querydata.json | The local path of the JSON file to which the query results are exported. |
Examples
The following sample code provides an example on how to read a row of data in which the value of the first primary key column is "86" and the value of the second primary key column is 6771.
get --pk '["86",6771]'
Update data
You can update a row of data in a data table. Alternatively, you can import a JSON configuration file to update a row of data in a data table.
Command syntax
update --pk '[primaryKeyValue, primaryKeyValue]' --attr '[{"c":"attributeColumnName", "v":"attributeColumnValue"}, {"c":"attributeColumnName", "v":"attributeColumnValue", "ts":timestamp}]' --condition condition
The following table describes the parameters in the command.
Parameter | Required | Example | Description |
-k,--pk | Yes | ["86", 6771] | The values of the primary key columns of the row. The value of this parameter is an array. Important The number and types of primary key columns that you specify must be the same as the actual number and types of primary key columns in the data table. |
--attr | Yes | [{"c":"name", "v":"redchen"}, {"c":"country", "v":"china", "ts":15327798534}] | The attribute columns of the row. The value of this parameter is a JSON array in which each attribute column is specified by the following fields: c specifies the name of the attribute column, v specifies the value of the attribute column, t optionally specifies the data type of the value, and ts optionally specifies the timestamp of the data. |
--condition | No | ignore | The row existence condition of the conditional update that is used to determine whether to update the row. Default value: ignore. For more information about the valid values, see Conditional updates. |
-i, --input | No | /tmp/inputdata.json | The path of the JSON configuration file that is used to update data. |
You can also use a configuration file to update data. The command syntax varies based on the operating system.
Windows
update -i D:\\localpath\\filename.json
Linux and macOS
update -i /localpath/filename.json
The following sample code provides an example of the content of a configuration file:
{
"PK":{
"Values":[
"86",
6771
]
},
"Attr":{
"Values":[
{
"C":"age",
"V":32,
"TS":1626860801604,
"IsInt":true
}
]
}
}
Examples
The following sample code provides an example on how to update a row of data in which the value of the first primary key column is "86" and the value of the second primary key column is 6771. Because the --condition parameter is set to ignore, the data is written regardless of whether the row exists. If the row exists, the new data overwrites the existing data.
update --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]' --condition ignore
Delete data
You can delete a row of data with the specified primary key.
Command syntax
delete --pk '[primaryKeyValue,primaryKeyValue]'
The following table describes the parameters in the command.
Parameter | Required | Example | Description |
-k,--pk | Yes | ["86", 6771] | The values of the primary key columns of the row. The value of this parameter is an array. Important The number and types of primary key columns that you specify must be the same as the actual number and types of primary key columns in the data table. |
Examples
The following sample code provides an example on how to delete a row of data in which the value of the first primary key column is "86" and the value of the second primary key column is 6771.
delete --pk '["86", 6771]'
Scan data
You can scan a data table to obtain all data or a specified number of rows of data in the data table.
Command syntax
scan --limit limit
The following table describes the parameters in the command.
Parameter | Required | Example | Description |
--limit | No | 10 | The maximum number of rows that you want to scan. If you do not configure this parameter, all data in the data table is scanned. |
Examples
The following sample code provides an example on how to scan up to 10 rows of data in a data table.
scan --limit 10
Export data
You can export data from a data table to a local JSON file.
Command syntax
scan -o /localpath/filename.json -c attributeColumnName,attributeColumnName,attributeColumnName
The following table describes the parameters in the command.
Parameter | Required | Example | Description |
-c, --columns | No | uid,name | The columns that you want to export. You can specify the names of primary key columns or attribute columns. If you do not specify a column name, all data in the row is exported. |
--max_version | No | 1 | The maximum number of data versions that can be exported. |
--time_range_start | No | 1626865596000 | The start timestamp of the version range of data that you want to export. The range includes the start value. |
--time_range_end | No | 1626869196000 | The end timestamp of the version range of data that you want to export. The range excludes the end value. |
--time_range_specific | No | 1626867396000 | The specific version of data that you want to export. |
--backward | No | N/A | Specifies that the system sorts the exported data in descending order of primary key. |
-o, --output | Yes | /tmp/mydata.json | The local path of the JSON file to which the query results are exported. |
-l, --limit | No | 10 | The maximum number of rows that you want the query to return. |
-b, --begin | No | '["86", 6771]' | The start of the primary key range of the data that you want to export. The range is a left-closed, right-open interval. |
-e, --end | No | '["86", 6775]' | The end of the primary key range of the data that you want to export. The range is a left-closed, right-open interval. |
Examples
Example 1
The following sample code provides an example on how to export all data from the current table to the local file named mydata.json.
scan -o /tmp/mydata.json
Example 2
The following sample code provides an example on how to export the data in the uid and name columns of the current table to the mydata.json local file.
scan -o /tmp/mydata.json -c uid,name
Example 3
The following sample code provides an example on how to export data whose primary key values are within a specific range from the current table to the mydata.json local file. The export starts at the row in which the value of the first primary key column is "86" and the value of the second primary key column is 6771, and ends before the row in which the value of the first primary key column is "86" and the value of the second primary key column is 6775.
scan -o /tmp/mydata.json -b '["86", 6771]' -e '["86", 6775]'
Example 4
The following sample code provides an example on how to export data whose primary key value is within a specific range from the current table to the mydata.json local file. The value of the first primary key column in which the export starts is "50" and the value of the first primary key column in which the export ends is "100". The value range of the second primary key column is not specified.
scan -o /tmp/mydata.json -b '["50",null]' -e '["100",null]'
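Exported files can be processed outside the CLI. The following Python sketch is illustrative and assumes the exported file uses the same one-JSON-object-per-line layout shown in the Import data section (a reasonable assumption, because exported files can be re-imported); verify the layout of your own export before relying on it.

```python
import json

def read_exported_rows(path):
    """Parse a `scan -o` output file, assuming one JSON object per line,
    into (pk_values, attr_dict) tuples."""
    rows = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            pk = record["PK"]["Values"]
            attrs = {a["C"]: a["V"] for a in record["Attr"]["Values"]}
            rows.append((pk, attrs))
    return rows

# Demonstrate on a sample file in the assumed layout.
sample = (
    '{"PK":{"Values":["redchen",0]},"Attr":{"Values":'
    '[{"C":"country","V":"china0"},{"C":"name","V":"redchen0"}]}}\n'
    '{"PK":{"Values":["redchen",1]},"Attr":{"Values":'
    '[{"C":"country","V":"china1"},{"C":"name","V":"redchen1"}]}}\n'
)
with open("mydata.json", "w") as f:
    f.write(sample)

rows = read_exported_rows("mydata.json")
```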
Import data
You can import data from a local JSON file to a data table.
If the path of the local JSON file contains Chinese characters, an error occurs when you import data.
Command syntax
import -i /localpath/filename.json --ignore_version
The following table describes the parameters in the command.
Parameter | Required | Example | Description |
-a, --action | No | put | The mode in which data is imported. Default value: put. |
-i, --input | Yes | /tmp/inputdata.json | The path of the local JSON file from which data is imported to the current data table. |
--ignore_version | No | N/A | Ignores timestamp checks and uses the current time as the timestamp. |
The following sample code provides an example of the content of a local configuration file:
{"PK":{"Values":["redchen",0]},"Attr":{"Values":[{"C":"country","V":"china0"},{"C":"name","V":"redchen0"}]}}
{"PK":{"Values":["redchen",1]},"Attr":{"Values":[{"C":"country","V":"china1"},{"C":"name","V":"redchen1"}]}}
Examples
Example 1
The following sample code provides an example on how to import data from the mydata.json file to the current data table.
import -i /tmp/mydata.json
Example 2
The following sample code provides an example on how to import data from the mydata.json file to the current data table with the current time used as the timestamp.
import -i /tmp/mydata.json --ignore_version