DataWorks provides SAP HANA Reader and SAP HANA Writer for you to read data from and write data to SAP HANA data sources. This topic describes the capabilities of synchronizing data from or to SAP HANA data sources.
Supported versions
The following SAP HANA versions are supported:
Data type mappings
The following table lists the data type mappings that SAP HANA Reader uses to convert data types.
| Category | SAP HANA data type |
| --- | --- |
| Integer | INT, TINYINT, SMALLINT, MEDIUMINT, and BIGINT |
| Floating point | FLOAT, DOUBLE, and DECIMAL |
| String | VARCHAR, CHAR, TINYTEXT, TEXT, MEDIUMTEXT, and LONGTEXT |
| Date and time | DATE, DATETIME, TIMESTAMP, TIME, and YEAR |
| Boolean | BIT and BOOLEAN |
| Binary | TINYBLOB, MEDIUMBLOB, BLOB, LONGBLOB, and VARBINARY |
Add a data source
Before you develop a synchronization task in DataWorks, you must add the required data source to DataWorks by following the instructions in Add and manage data sources. When you add a data source, you can view the infotips of parameters in the DataWorks console to understand their meanings.
Develop a data synchronization task
For information about the entry point and the procedure for configuring a synchronization task, see the following configuration guides.
Configure a batch synchronization task to synchronize data of a single table
Appendix: Code and parameters
Configure a batch synchronization task by using the code editor
If you want to configure a batch synchronization task by using the code editor, you must configure the related parameters in the script based on the unified script format requirements. For more information, see Configure a batch synchronization task by using the code editor. The following information describes the parameters that you must configure for data sources when you configure a batch synchronization task by using the code editor.
Parameters in code for SAP HANA Reader
| Parameter | Description |
| --- | --- |
| username | The username that is used to log on to the SAP HANA database. |
| password | The password that is used to log on to the SAP HANA database. |
| column | The names of the columns from which you want to read data. To read data from all the columns in the source table, set this parameter to an asterisk (*). Note: If a column name that you want to specify contains forward slashes (/), you must specify the column name in the format of \"your_column_name\" for escaping. For example, if the column name is /abc/efg, it must be specified as \"/abc/efg\". |
| table | The name of the table from which you want to read data. |
| jdbcUrl | The JDBC URL that is used to connect to SAP HANA. Example: jdbc:sap://127.0.0.1:30215?currentschema=TEST. |
| splitPk | The field that is used for data sharding when SAP HANA Reader reads data. If you configure this parameter, the source table is sharded based on the value of this parameter, and Data Integration runs parallel threads to read data. You can specify a field of an integer data type for the splitPk parameter. If the source table does not contain fields of integer data types, you can leave this parameter empty. |
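Put together, a reader step that uses these parameters might look like the following sketch. The connection details, table name, and column names are placeholder values, not settings from a real environment; the \"/abc/efg\" column illustrates the escaping rule for column names that contain forward slashes (/), and splitPk is set to an integer column to enable sharded, parallel reads:

```json
{
    "stepType": "saphana",
    "parameter": {
        "username": "SYSTEM",
        "password": "********",
        "jdbcUrl": "jdbc:sap://127.0.0.1:30215?currentschema=TEST",
        "table": "SALES_DATA",
        "column": [
            "id",
            "value",
            "\"/abc/efg\""
        ],
        "splitPk": "id"
    },
    "name": "Reader",
    "category": "reader"
}
```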
Code for SAP HANA Writer
In the following code, a synchronization task is configured to write data to an SAP HANA database:
```json
{
    "type": "job",
    "version": "2.0",
    "steps": [
        {
            "stepType": "stream",
            "parameter": {},
            "name": "Reader",
            "category": "reader"
        },
        {
            "stepType": "saphana",
            "parameter": {
                "postSql": [],
                "datasource": "",
                "column": [
                    "id",
                    "value"
                ],
                "batchSize": 1024,
                "table": "",
                "preSql": [
                    "delete from XXX;"
                ]
            },
            "name": "Writer",
            "category": "writer"
        }
    ],
    "setting": {
        "errorLimit": {
            "record": "0"
        },
        "speed": {
            "throttle": true,
            "concurrent": 1,
            "mbps": "12"
        }
    },
    "order": {
        "hops": [
            {
                "from": "Reader",
                "to": "Writer"
            }
        ]
    }
}
```
Parameters in code for SAP HANA Writer
| Parameter | Description | Required | Default value |
| --- | --- | --- | --- |
| datasource | The name of the data source. It must be the same as the name of the added data source. You can add data sources by using the code editor. | Yes | No default value |
| table | The name of the table to which you want to write data. | Yes | No default value |
| column | The names of the columns to which you want to write data. Separate the names with commas (,), such as "column": ["id", "name", "age"]. If you want to write data to all the columns in the destination table, set this parameter to an asterisk (*), such as "column": ["*"]. Note: If a source column name that you specify contains forward slashes (/), you must escape the column name in the format of \"your_column_name\" by using backslashes (\). For example, if the column name is /abc/efg, it must be escaped as \"/abc/efg\". | Yes | No default value |
| preSql | The SQL statement that you want to execute before the batch synchronization task is run. You can execute only one SQL statement on the codeless UI and multiple SQL statements in the code editor. For example, you can set this parameter to an SQL statement that deletes outdated data. Note: If you specify multiple SQL statements, it is not guaranteed that all the statements are successfully executed. | No | No default value |
| postSql | The SQL statement that you want to execute after the synchronization task is run. You can execute only one SQL statement on the codeless UI and multiple SQL statements in the code editor. For example, you can set this parameter to the alter table tablename add colname timestamp DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP SQL statement, which adds a timestamp column. | No | No default value |
| batchSize | The number of data records to write at a time. Set this parameter to an appropriate value based on your business requirements. An appropriate value can greatly reduce the number of interactions between Data Integration and SAP HANA and increase throughput. If you set this parameter to an excessively large value, an out-of-memory (OOM) error may occur during data synchronization. | No | 1024 |
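The effect of batchSize can be illustrated with a short, hypothetical Python sketch. This is not DataWorks code; it only shows how grouping rows into batches reduces the number of round trips to the database compared with writing one row at a time:

```python
def chunked(records, batch_size=1024):
    """Split records into write batches of at most batch_size rows.

    Each yielded batch corresponds to one round trip to the database,
    so 2,500 rows with batch_size=1024 cost 3 round trips instead of 2,500.
    """
    for start in range(0, len(records), batch_size):
        yield records[start : start + batch_size]


rows = [(i, f"value-{i}") for i in range(2500)]
batch_sizes = [len(batch) for batch in chunked(rows, batch_size=1024)]
print(batch_sizes)  # → [1024, 1024, 452]
```

A larger batch_size means fewer round trips but more rows buffered in memory per write, which is why an excessively large value can trigger an OOM error.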