DataWorks provides SQL Server Reader and SQL Server Writer for you to read data from and write data to SQL Server data sources. This topic describes the capabilities of synchronizing data from or to SQL Server data sources.
Supported SQL Server versions
SQL Server Reader uses the com.microsoft.sqlserver sqljdbc4 driver, version 4.0. For more information about the capabilities of the driver, see the official documentation. The following table lists commonly used SQL Server versions and indicates whether they are supported by the driver.
| Version | Supported |
| --- | --- |
| SQL Server 2016 | Yes |
| SQL Server 2014 | Yes |
| SQL Server 2012 | Yes |
| PDW 2008R2 AU34 | Yes |
| SQL Server 2008 R2 | Yes |
| SQL Server 2008 | Yes |
| SQL Server 2019 | No |
| SQL Server 2018 | No |
| Azure SQL Managed Instance | No |
| Azure Synapse Analytics | No |
| Azure SQL Database | Yes |
Limits
You can read data from views during batch synchronization.
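For example, a view can be queried the same way as a table by using the querySql parameter. The following minimal sketch assumes a hypothetical view named dbo.order_summary_view and uses the sample data source name sql_server_source that also appears later in this topic:
{
    "stepType": "sqlserver",
    "parameter": {
        "datasource": "sql_server_source",
        // A view can be queried like a table during batch synchronization. The view and column names are placeholders.
        "querySql": "SELECT id, total_amount FROM dbo.order_summary_view"
    },
    "name": "Reader",
    "category": "reader"
}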
Data types
For information about all data types in each SQL Server version, see the official SQL Server documentation. The following table lists the support status of the main data types in SQL Server 2016.
| Data type | SQL Server Reader | SQL Server Writer |
| --- | --- | --- |
| bigint | Supported | Supported |
| bit | Supported | Supported |
| decimal | Supported | Supported |
| int | Supported | Supported |
| money | Supported | Supported |
| numeric | Supported | Supported |
| smallint | Supported | Supported |
| smallmoney | Supported | Supported |
| tinyint | Supported | Supported |
| float | Supported | Supported |
| real | Supported | Supported |
| date | Supported | Supported |
| datetime2 | Supported | Supported |
| datetime | Supported | Supported |
| datetimeoffset | Not supported | Not supported |
| smalldatetime | Supported | Supported |
| time | Supported | Supported |
| char | Supported | Supported |
| text | Supported | Supported |
| varchar | Supported | Supported |
| nchar | Supported | Supported |
| ntext | Supported | Supported |
| nvarchar | Supported | Supported |
| binary | Supported | Supported |
| image | Supported | Supported |
| varbinary | Supported | Supported |
| cursor | Not supported | Not supported |
| hierarchyid | Not supported | Not supported |
| sql_variant | Supported | Supported |
| Spatial Geometry Types | Not supported | Not supported |
| table | Not supported | Not supported |
| rowversion | Not supported | Not supported |
| uniqueidentifier | Supported | Supported |
| xml | Supported | Supported |
| Spatial Geography Types | Not supported | Not supported |
The following table lists the data type mappings based on which SQL Server Reader and SQL Server Writer convert data types.
| Category | SQL Server data type |
| --- | --- |
| Integer | BIGINT, INT, SMALLINT, and TINYINT |
| Floating point | FLOAT, DECIMAL, REAL, and NUMERIC |
| String | CHAR, NCHAR, NTEXT, NVARCHAR, TEXT, VARCHAR, NVARCHAR(MAX), and VARCHAR(MAX) |
| Date and time | DATE, DATETIME, and TIME |
| Boolean | BIT |
| Binary | BINARY, VARBINARY, VARBINARY(MAX), and TIMESTAMP |
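For columns whose data types are not supported, such as datetimeoffset, one possible workaround is to convert the values in a custom query so that they are read as a supported type. The following minimal sketch assumes a hypothetical table dbo.orders with a datetimeoffset column named order_time and uses the querySql parameter to cast the column to a string:
{
    "stepType": "sqlserver",
    "parameter": {
        "datasource": "sql_server_source",
        // Cast the unsupported datetimeoffset column to NVARCHAR so that it is read as a string. Table and column names are placeholders.
        "querySql": "SELECT id, CONVERT(NVARCHAR(40), order_time) AS order_time FROM dbo.orders"
    },
    "name": "Reader",
    "category": "reader"
}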
Develop a data synchronization task
For information about the entry point and the procedure for configuring a data synchronization task, see the following sections. For information about the parameter settings, view the infotip of each parameter on the configuration tab of the task.
Add a data source
Before you configure a data synchronization node to synchronize data from or to a specific data source, you must add the data source to DataWorks. For more information, see Add and manage data sources.
Configure a batch synchronization task to synchronize data of a single table
For more information about the configuration procedure, see Configure a batch synchronization task by using the codeless UI and Configure a batch synchronization task by using the code editor.
For information about all parameters that are configured and the code that is run when you use the code editor to configure a batch synchronization task, see Appendix: Code and parameters.
Configure synchronization settings to implement batch synchronization of all data in a database
For more information about the configuration procedure, see Configure a synchronization task in Data Integration.
Additional information
Data synchronization between primary and secondary databases
A secondary SQL Server database can be deployed for disaster recovery. The secondary database continuously synchronizes data from the primary database based on transaction logs. Data latency between the primary and secondary databases cannot be completely prevented, which may result in data inconsistency.
Data consistency control
SQL Server is a relational database management system (RDBMS) that supports strong consistency for data queries. A database snapshot is created before a synchronization task starts. SQL Server Reader reads data from the database snapshot. Therefore, if new data is written to the database during data synchronization, SQL Server Reader cannot obtain the new data.
Data consistency cannot be ensured if you enable SQL Server Reader to use parallel threads to read data in a synchronization task.
SQL Server Reader shards the source table based on the value of the splitPk parameter and uses parallel threads to read data. These parallel threads belong to different transactions and read data at different points in time. Therefore, the parallel threads observe different snapshots.
Data inconsistencies cannot be prevented if parallel threads are used for a synchronization task. The following workarounds can be used:
- Enable SQL Server Reader to use a single thread to read data in a synchronization task. This means that you do not specify a shard key (splitPk) for SQL Server Reader. This ensures data consistency but reduces synchronization efficiency. For a sample configuration, see the sketch after these workarounds.
- Make sure that no data is written to the source table during data synchronization so that the data in the source table remains unchanged. For example, you can lock the source table or disable data synchronization between the primary and secondary databases. This keeps synchronization efficient, but your ongoing services may be interrupted.
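The following minimal sketch illustrates the first workaround. It assumes the sample data source sql_server_source and a hypothetical table dbo.orders; the splitPk parameter is left empty so that the table is not sharded and a single thread reads the data:
{
    "stepType": "sqlserver",
    "parameter": {
        "datasource": "sql_server_source",
        "table": "dbo.orders",// A hypothetical source table.
        "column": [
            "id",
            "name"
        ],
        "splitPk": ""// Leave the shard key empty so that a single thread reads the data.
    },
    "name": "Reader",
    "category": "reader"
}
In the speed settings of the task, you can also set the concurrent parameter to 1 so that no additional parallel threads are started.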
Character encoding
SQL Server Reader uses JDBC to read data. This enables SQL Server Reader to automatically convert the encoding formats of characters. Therefore, you do not need to specify the encoding format.
Incremental data synchronization
SQL Server Reader uses JDBC to connect to a database and uses a SELECT statement with a WHERE clause to read incremental data.
For batch data, incremental insert, update, and delete operations (including logical delete operations) are distinguished by timestamps. Specify the WHERE clause based on a timestamp that is later than the latest timestamp in the previous synchronization.
For streaming data, specify the WHERE clause based on the ID of a specific record. The ID must be greater than the maximum ID involved in the previous synchronization.
If added or modified data cannot be distinguished, SQL Server Reader can read only full data.
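The following minimal sketch shows one way to configure such an incremental read based on a timestamp. It assumes a hypothetical table dbo.orders with a modify_time column; the timestamp literal in the WHERE clause stands for the latest timestamp of the previous synchronization and would typically be replaced by a scheduling parameter:
{
    "stepType": "sqlserver",
    "parameter": {
        "datasource": "sql_server_source",
        "table": "dbo.orders",// A hypothetical source table.
        "column": [
            "id",
            "name",
            "modify_time"
        ],
        "where": "modify_time > '2024-06-01 00:00:00'"// Read only the rows that were inserted or updated after the previous synchronization.
    },
    "name": "Reader",
    "category": "reader"
}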
Syntax validation
SQL Server Reader allows you to specify custom SELECT statements by using the querySql parameter but does not verify the syntax of these statements.
Appendix: Code and parameters
Appendix: Configure a batch synchronization task by using the code editor
If you use the code editor to configure a batch synchronization node, you must configure parameters for the reader and writer of the related data source based on the format requirements in the code editor. For more information about the format requirements, see Configure a batch synchronization node by using the code editor. The following information describes the configuration details of parameters for the reader and writer in the code editor.
Code for SQL Server Reader
{
"type":"job",
"version":"2.0",// The version number.
"steps":[
{
"stepType":"sqlserver",// The plug-in name.
"parameter":{
"datasource":"",// The name of the data source.
"column":[// The names of the columns.
"id",
"name"
],
"where":"",// The WHERE clause.
"splitPk":"",// The shard key based on which the table is sharded.
"table":""// The name of the table.
},
"name":"Reader",
"category":"reader"
},
{
"stepType":"stream",
"parameter":{},
"name":"Writer",
"category":"writer"
}
],
"setting":{
"errorLimit":{
"record":"0"// The maximum number of dirty data records allowed.
},
"speed":{
"throttle":true,// Specifies whether to enable throttling. The value false indicates that throttling is disabled, and the value true indicates that throttling is enabled. The mbps parameter takes effect only when the throttle parameter is set to true.
"concurrent":1 // The maximum number of parallel threads.
"mbps":"12",// The maximum transmission rate. Unit: MB/s.
}
},
"order":{
"hops":[
{
"from":"Reader",
"to":"Writer"
}
]
}
}
You can use the querySql parameter to specify an SQL statement to read data. The following code provides an example. In the following code, sql_server_source is the SQL Server data source, dbo.test_table is the table from which you want to read data, and name is the column from which you want to read data.
{
"stepType": "sqlserver",
"parameter": {
"querySql": "select name from dbo.test_table",
"datasource": "sql_server_source",
"column": [
"name"
],
"where": "",
"splitPk": "id"
},
"name": "Reader",
"category": "reader"
},
Parameters in code for SQL Server Reader
| Parameter | Description | Required | Default value |
| --- | --- | --- | --- |
| datasource | The name of the data source. It must be the same as the name of the added data source. You can add data sources by using the code editor. | Yes | No default value |
| table | The name of the table from which you want to read data. Each synchronization task can read data from only one table. | Yes | No default value |
| column | The names of the columns from which you want to read data. Specify the names in a JSON array. The default value is [ * ], which indicates all the columns in the source table. | Yes | No default value |
| splitPk | The field that is used for data sharding when SQL Server Reader reads data. If you configure this parameter, the source table is sharded based on the value of this parameter, and Data Integration uses parallel threads to read data. This improves synchronization efficiency. | No | No default value |
| where | The WHERE clause. SQL Server Reader generates an SQL statement based on the settings of the column, table, and where parameters and uses the generated statement to read data. For example, during a test, you can set the where parameter to a condition that returns only a small number of rows. To read only the data that is generated on the current day, specify a condition on a time column, such as a creation time column. | No | No default value |
| querySql | The SQL statement that is used for refined data filtering. Configure this parameter as a complete SELECT statement, such as the statement in the preceding code example. | No | No default value |
| fetchSize | The number of data records to read at a time. This parameter determines the number of interactions between Data Integration and the source database and affects read efficiency. Note: If you set this parameter to a value greater than 2048, an out of memory (OOM) error may occur during data synchronization. | No | 1024 |
SQL Server Reader generates an SQL statement based on the settings of the table, column, and where parameters and sends the generated statement to the SQL Server database.
If you configure the querySql parameter, SQL Server Reader sends the value of this parameter to the SQL Server database.
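As an illustration, the comment in the following hypothetical configuration shows the kind of statement that SQL Server Reader generates from the table, column, and where parameters. The table and column names are assumptions for the example.
{
    "stepType": "sqlserver",
    "parameter": {
        "datasource": "sql_server_source",
        "table": "dbo.orders",
        "column": [
            "id",
            "name"
        ],
        "where": "id < 100"// The generated statement is equivalent to: SELECT id, name FROM dbo.orders WHERE id < 100.
    },
    "name": "Reader",
    "category": "reader"
}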
Code for SQL Server Writer
{
"type":"job",
"version":"2.0",// The version number.
"steps":[
{
"stepType":"stream",
"parameter":{},
"name":"Reader",
"category":"reader"
},
{
"stepType":"sqlserver",// The plug-in name.
"parameter":{
"postSql":[],// The SQL statement that you want to execute after the synchronization task is run.
"datasource":"",// The name of the data source.
"column":[// The names of the columns.
"id",
"name"
],
"table":"",// The name of the table.
"preSql":[]// The SQL statement that you want to execute before the synchronization task is run.
},
"name":"Writer",
"category":"writer"
}
],
"setting":{
"errorLimit":{
"record":"0"// The maximum number of dirty data records allowed.
},
"speed":{
"throttle":true,// Specifies whether to enable throttling. The value false indicates that throttling is disabled, and the value true indicates that throttling is enabled. The mbps parameter takes effect only when the throttle parameter is set to true.
"concurrent":1, // The maximum number of parallel threads.
"mbps":"12"// The maximum transmission rate. Unit: MB/s.
}
},
"order":{
"hops":[
{
"from":"Reader",
"to":"Writer"
}
]
}
}
Parameters in code for SQL Server Writer
| Parameter | Description | Required | Default value |
| --- | --- | --- | --- |
| datasource | The name of the data source. It must be the same as the name of the added data source. You can add data sources by using the code editor. | Yes | No default value |
| table | The name of the table to which you want to write data. | Yes | No default value |
| column | The names of the columns to which you want to write data. Specify the names in a JSON array and separate them with commas (,), such as ["id","name"]. | Yes | No default value |
| preSql | The SQL statement that you want to execute before the synchronization task is run. For example, you can set this parameter to the SQL statement that is used to delete outdated data. You can execute only one SQL statement on the codeless UI and multiple SQL statements in the code editor. | No | No default value |
| postSql | The SQL statement that you want to execute after the synchronization task is run. For example, you can set this parameter to the SQL statement that is used to add a timestamp. You can execute only one SQL statement on the codeless UI and multiple SQL statements in the code editor. | No | No default value |
| writeMode | The write mode. Valid value: insert. If a primary key conflict or unique index conflict occurs, Data Integration considers the conflicting records as dirty data and retains the original data. | No | insert |
| batchSize | The number of data records to write at a time. Set this parameter to an appropriate value based on your business requirements. An appropriate value greatly reduces the number of interactions between Data Integration and SQL Server and increases throughput. If you set this parameter to an excessively large value, an OOM error may occur during data synchronization. | No | 1024 |
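The following minimal sketch puts several of these parameters together. It assumes a hypothetical data source sql_server_target, a hypothetical target table dbo.orders_copy, and a hypothetical audit table dbo.sync_audit; the preSql statement clears outdated rows before the task runs, and the postSql statement records the completion time after the task finishes:
{
    "stepType": "sqlserver",
    "parameter": {
        "datasource": "sql_server_target",
        "table": "dbo.orders_copy",
        "column": [
            "id",
            "name"
        ],
        "preSql": ["DELETE FROM dbo.orders_copy WHERE load_date < '2024-06-01'"],// Delete outdated rows before new data is written. The load_date column is a placeholder.
        "postSql": ["INSERT INTO dbo.sync_audit (table_name, finished_at) VALUES ('dbo.orders_copy', GETDATE())"],// Record the completion time in a hypothetical audit table.
        "writeMode": "insert",
        "batchSize": 1024
    },
    "name": "Writer",
    "category": "writer"
}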