This topic describes how to import incremental data from Log Service to a Lindorm wide table by using the Lindorm Tunnel Service (LTS) web UI.
Usage notes
This feature is unavailable for LTS instances that are purchased after June 16, 2023. If your LTS instance was purchased before June 16, 2023, you can continue to use this feature.
Prerequisites
You are logged on to the Lindorm Tunnel Service (LTS) web UI. For more information, see Create a synchronization task.
A Lindorm SQL data source is added. For more information, see Add a Lindorm SQL data source.
A LogHub data source is added.
Supported destination table types
Only tables that are created by executing Lindorm SQL statements are supported as destination tables.
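For reference, the destination table that is used in the sample configuration in the Parameters section could be created by executing a Lindorm SQL statement similar to the following sketch. The VARCHAR column types are assumptions for illustration; use the types that match your data:
-- Create a wide table named sls in the default database.
-- id is the primary key, so no column family needs to be specified for it.
CREATE TABLE sls (
  id VARCHAR,
  col1 VARCHAR,
  col2 VARCHAR,
  PRIMARY KEY (id)
);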
Procedure
Log on to the LTS web UI and choose Import Lindorm/HBase > SLS incremental Import.
On the SLS incremental Import page, click create new job.
Configure Tunnel Name, select the source and destination clusters, and enter the name of the table that you want to synchronize or migrate.
Click create. After the channel is created, you can view the channel details.
Parameters
The following sample channel configuration shows the supported parameters. The inline comments describe each parameter:
{
  "reader": {
    "columns": [
      "__client_ip__",
      "C_Source",
      "id",
      "name"
    ],
    "consumerSize": 2, // The number of consumers that subscribe to the LogHub data. Default value: 1.
    "logstore": "LTS-test"
  },
  "writer": {
    "columns": [
      {
        "name": "col1",
        "value": "{{ concat('xx', name) }}" // The value supports expressions.
      },
      {
        "name": "col2",
        "value": "__client_ip__" // The name of the mapped source column.
      },
      {
        "isPk": true, // Specifies whether the column is included in the primary key.
        "name": "id", // You do not need to specify a column family for primary key columns.
        "value": "id"
      }
    ],
    "tableName": "default.sls"
  }
}
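For example, assuming a log entry in which __client_ip__ is 192.168.1.10, name is test, and id is 1001, the preceding configuration writes a row to default.sls in which id is 1001, col1 is xxtest, and col2 is 192.168.1.10. If consumption cannot keep up with the rate at which logs are written, you can increase the value of consumerSize; assuming standard LogHub consumer group behavior, the effective parallelism is capped by the number of shards in the source Logstore.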
Simple Jtwig syntax is supported in the value field, as shown in the following example. For more information about Jtwig syntax, see Jtwig syntax.
{
  "name": "hhh",
  "value": "{{ concat(title, id) }}"
}
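For example, assuming a log entry in which title is logA and id is 001, the preceding expression writes logA001 to the hhh column.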