If you want to consume data in a data table in real time, you can call the CreateTunnel operation to create a tunnel for the table. You can create multiple tunnels for a data table. When you create a tunnel, you must specify the data table name, tunnel name, and tunnel type.
Prerequisites
A TunnelClient instance is initialized.
A data table is created. For more information, see Create a data table.
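A minimal initialization sketch for the first prerequisite is shown below. It assumes that the Go SDK's tunnel package provides NewTunnelClient with an endpoint, instance name, and AccessKey pair; verify the constructor against the SDK version that you use, and replace the placeholders with your own values.
// A minimal sketch of TunnelClient initialization. NewTunnelClient and its
// argument order are assumptions based on the Go SDK for Tunnel Service;
// verify them against the SDK version that you use.
tunnelClient := tunnel.NewTunnelClient(
    "<ENDPOINT>",        // the Tablestore endpoint of your instance
    "<INSTANCE_NAME>",
    "<ACCESS_KEY_ID>",
    "<ACCESS_KEY_SECRET>",
)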
Parameters
Request parameters
| Parameter | Description |
| --- | --- |
| TableName | The name of the data table for which you want to create a tunnel. |
| TunnelName | The name of the tunnel. |
| Type | The type of the tunnel. Valid values: BaseData, Stream, and BaseAndStream. A BaseData tunnel consumes the full data of the data table. A Stream tunnel consumes the incremental data of the data table. A BaseAndStream tunnel consumes the full data and then the incremental data of the data table. If you set this parameter to Stream or BaseAndStream, the system treats data that is written to the data table after the tunnel is created as incremental data. If you want to consume only the incremental data that is generated after a specific point in time, you must configure the startTime parameter for the incremental data. For an illustrative sketch, see the Examples section. |
Response parameters
| Parameter | Description |
| --- | --- |
| TunnelId | The ID of the tunnel. |
| ResponseInfo | Other fields returned in the response, including the RequestId field. RequestId uniquely identifies the request. |
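The sketch below shows how these fields might be read from the response in the Go SDK, assuming that the returned CreateTunnelResponse exposes TunnelId and ResponseInfo as described in the preceding table:
// Read the tunnel ID and the request ID from the response. The ResponseInfo
// field and its RequestId member are assumed to match the table above.
log.Println("tunnel id is", resp.TunnelId)
log.Println("request id is", resp.ResponseInfo.RequestId)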
Examples
The following sample code provides an example of how to create a tunnel of the BaseAndStream type:
req := &tunnel.CreateTunnelRequest{
    TableName:  "<TABLE_NAME>",
    TunnelName: "<TUNNEL_NAME>",
    Type:       tunnel.TunnelTypeBaseStream, // Create a tunnel of the BaseAndStream type.
}
resp, err := tunnelClient.CreateTunnel(req)
if err != nil {
    log.Fatal("create tunnel failed ", err)
}
log.Println("tunnel id is", resp.TunnelId)
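If you want incremental consumption to start from a specific point in time, as noted in the description of the Type parameter, you can attach a stream configuration to the request. The following is a minimal sketch, assuming that the Go SDK version you use exposes a StreamTunnelConfig field with a StartOffset timestamp in milliseconds and that the time, log, and tunnel packages are imported; check the SDK for the exact field names before relying on this example.
// A hedged sketch: create a Stream tunnel that consumes incremental data
// generated after a specific point in time. StreamTunnelConfig and its
// StartOffset member are assumptions about the SDK version in use.
startTime := time.Now().Add(-1 * time.Hour) // consume data written in the last hour
req := &tunnel.CreateTunnelRequest{
    TableName:  "<TABLE_NAME>",
    TunnelName: "<TUNNEL_NAME>",
    Type:       tunnel.TunnelTypeStream,
    StreamTunnelConfig: &tunnel.StreamTunnelConfig{
        StartOffset: uint64(startTime.UnixNano() / int64(time.Millisecond)),
    },
}
resp, err := tunnelClient.CreateTunnel(req)
if err != nil {
    log.Fatal("create tunnel failed ", err)
}
log.Println("tunnel id is", resp.TunnelId)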
References
For information about the API operation that you can call to create a tunnel, see CreateTunnel.
If you want to quickly use Tunnel Service to consume data, see Getting started.
You can query information about all tunnels of a table. For more information, see Query information about all tunnels of a data table.
You can query information about a tunnel. For more information, see Query information about a tunnel.
You can delete a tunnel that you no longer require. For more information, see Delete a tunnel.
You can use Tunnel Service to migrate data. For more information, see Synchronize data from one table to another table in Tablestore.
Realtime Compute for Apache Flink can use the tunnels of Tunnel Service as the source of streaming data to compute and analyze Tablestore data. For more information, see Tutorial for the Wide Column model and Tutorial (TimeSeries model).
You are not charged for Tunnel Service. However, you are charged for the read throughput that is generated when you use tunnels to consume data. For more information, see Billing overview.