
DataWorks: AnalyticDB for MySQL 3.0 data source

Last updated: Dec 25, 2023

DataWorks provides AnalyticDB for MySQL 3.0 Reader and AnalyticDB for MySQL 3.0 Writer for you to read data from and write data to AnalyticDB for MySQL 3.0 data sources. This topic describes the capabilities of synchronizing data from or to AnalyticDB for MySQL 3.0 data sources.

Limits

  • An AnalyticDB for MySQL 3.0 data source of the data lakehouse edition cannot be connected to shared resource groups and therefore cannot be used in synchronization tasks that run on such resource groups.

  • If you switch an AnalyticDB for MySQL 3.0 instance from the data warehouse edition to the data lakehouse edition, synchronization tasks that use the instance and run on shared resource groups may fail. We recommend that you check whether any synchronization tasks that use the instance run on shared resource groups. If such tasks exist, switch them to exclusive resource groups.

  • Views can be read during batch synchronization.

Data type mappings

Batch data read

The following table lists the data type mappings based on which AnalyticDB for MySQL 3.0 Reader converts data types.

Category        | AnalyticDB for MySQL 3.0 data type
--------------- | -------------------------------------------
Integer         | INT, INTEGER, TINYINT, SMALLINT, and BIGINT
Floating point  | FLOAT, DOUBLE, and DECIMAL
String          | VARCHAR
Date and time   | DATE, DATETIME, TIMESTAMP, and TIME
Boolean         | BOOLEAN

Batch data write

The following table lists the data type mappings based on which AnalyticDB for MySQL 3.0 Writer converts data types.

Category        | AnalyticDB for MySQL 3.0 data type
--------------- | -------------------------------------------
Integer         | INT, INTEGER, TINYINT, SMALLINT, and BIGINT
Floating point  | FLOAT, DOUBLE, and DECIMAL
String          | VARCHAR
Date and time   | DATE, DATETIME, TIMESTAMP, and TIME
Boolean         | BOOLEAN

Develop a data synchronization task

For information about the entry point and the procedure for configuring a data synchronization task, see the following sections. For information about the parameter settings, view the infotip of each parameter on the configuration tab of the task.

Add a data source

Before you configure a data synchronization task to synchronize data from or to a specific data source, you must add the data source to DataWorks. For more information, see Add and manage data sources.

Configure a batch synchronization task to synchronize data of a single table

For more information about the configuration procedure, see Configure a batch synchronization task by using the codeless UI.

Configure a real-time synchronization task to synchronize data of a single table or a database

For more information about the configuration procedure, see Configure a real-time synchronization task in DataStudio.

Configure synchronization settings to implement batch synchronization of all data in a database or real-time synchronization of full and incremental data in a single table or a database

For more information about the configuration procedure, see Configure a synchronization task in Data Integration.

Appendix: Code and parameters

Appendix: Configure a batch synchronization task by using the code editor

If you use the code editor to configure a batch synchronization task, you must configure parameters for the reader and writer of the related data source based on the format requirements in the code editor. For more information about the format requirements, see Configure a batch synchronization task by using the code editor. The following information describes the configuration details of parameters for the reader and writer in the code editor.

Code for AnalyticDB for MySQL 3.0 Reader

{
    "type": "job",
    "steps": [
        {
            "stepType": "analyticdb_for_mysql", // The plug-in name.
            "parameter": {
                "column": [ // The names of the columns.
                    "id",
                    "value",
                    "table"
                ],
                "connection": [
                    {
                        "datasource": "xxx", // The name of the data source.
                        "table": [ // The name of the table.
                            "xxx"
                        ]
                    }
                ],
                "where": "", // The WHERE clause.
                "splitPk": "", // The shard key.
                "encoding": "UTF-8" // The encoding format.
            },
            "name": "Reader",
            "category": "reader"
        },
        {
            "stepType": "stream",
            "parameter": {},
            "name": "Writer",
            "category": "writer"
        }
    ],
    "version": "2.0",
    "order": {
        "hops": [
            {
                "from": "Reader",
                "to": "Writer"
            }
        ]
    },
    "setting": {
        "errorLimit": {
            "record": "0" // The maximum number of dirty data records allowed.
        },
        "speed": {
            "throttle": true, // Specifies whether to enable throttling. The value false indicates that throttling is disabled, and the value true indicates that throttling is enabled. The mbps parameter takes effect only when the throttle parameter is set to true.
            "concurrent": 1, // The maximum number of parallel threads.
            "mbps": "12" // The maximum transmission rate. Unit: MB/s.
        }
    }
}

Parameters in code for AnalyticDB for MySQL 3.0 Reader

For each parameter, the description is followed by whether the parameter is required and its default value.

datasource

The name of the data source. It must be the same as the name of the added data source. You can add data sources by using the code editor.

Required: Yes. No default value.

table

The name of the table from which you want to read data.

Required: Yes. No default value.

column

The names of the columns from which you want to read data, specified as a JSON array. You can set this parameter to [*] to read all columns.

  • You can select specific columns to read.

  • The column order can be changed: you can read the specified columns in an order different from the one defined in the table schema.

  • Constants are supported. The column names must follow the SQL syntax supported by MySQL, such as ["id", "`table`", "1", "'bazhen.csy'", "null", "to_char(a + 1)", "2.3", "true"].

    • id: a column name.

    • `table`: the name of a column that contains a reserved keyword, enclosed in backticks.

    • 1: an integer constant.

    • 'bazhen.csy': a string constant, enclosed in single quotation marks.

    • null: a null value.

    • to_char(a + 1): a function expression that computes a value.

    • 2.3: a floating-point constant.

    • true: a Boolean value.

  • The column parameter must explicitly list all the columns from which you want to read data. The parameter cannot be left empty.

Required: Yes. No default value.

splitPk

The field that is used for data sharding when AnalyticDB for MySQL 3.0 Reader reads data. If you specify this parameter, the source table is sharded based on the values of this field, and Data Integration runs parallel threads to read data. This way, data can be synchronized more efficiently.

  • We recommend that you set the splitPk parameter to the name of a primary key column of the table. Data can then be distributed evenly across shards rather than concentrated in a few of them.

  • The splitPk parameter supports sharding only for integer data types. If you set it to a field of an unsupported data type, such as a string, floating-point, or date type, the setting is ignored and a single thread is used to read data.

  • If the splitPk parameter is not provided or is left empty, a single thread is used to read data.

Required: No. No default value.
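
For example, a reader that shards on an integer primary key could combine splitPk with the concurrent setting shown in the full example above. A minimal sketch of the parameter fragment follows; the data source, table, and column names are hypothetical:

"parameter": {
    "column": ["id", "value"],
    "connection": [
        {
            "datasource": "my_adb_source", // Hypothetical data source name.
            "table": ["orders"] // Hypothetical table name.
        }
    ],
    "splitPk": "id" // Shard on the integer primary key so each parallel thread reads one range.
}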

where

The WHERE clause. For example, you can set this parameter to gmt_create > $bizdate to read only the data that is generated on the current day.

  • You can use the WHERE clause to read incremental data. If the where parameter is not provided or is left empty, AnalyticDB for MySQL 3.0 Reader reads all data.

  • Do not set the where parameter to limit 10. LIMIT is not part of a valid WHERE clause in MySQL SQL syntax.

Required: No. No default value.
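
As a sketch of an incremental read, the where parameter can reference the $bizdate scheduling parameter together with the gmt_create column from the example above; both names are illustrative:

"parameter": {
    "column": ["id", "value"],
    "where": "gmt_create > $bizdate" // Incremental read: only rows newer than the business date of the scheduled run.
}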

Code for AnalyticDB for MySQL 3.0 Writer

{
    "type": "job",
    "steps": [
        {
            "stepType": "stream",
            "parameter": {},
            "name": "Reader",
            "category": "reader"
        },
        {
            "stepType": "analyticdb_for_mysql", // The plug-in name.
            "parameter": {
                "postSql": [], // The SQL statements to execute after the synchronization task is run.
                "tableType": null, // A reserved field. Default value: null.
                "datasource": "hangzhou_ads", // The name of the data source.
                "column": [ // The names of the columns.
                    "id",
                    "value"
                ],
                "guid": null,
                "writeMode": "insert", // The write mode. For more information, see the description of the writeMode parameter.
                "batchSize": 2048, // The number of data records to write at a time. For more information, see the description of the batchSize parameter.
                "encoding": "UTF-8", // The encoding format.
                "table": "t5", // The name of the destination table.
                "preSql": [] // The SQL statements to execute before the synchronization task is run.
            },
            "name": "Writer",
            "category": "writer"
        }
    ],
    "version": "2.0", // The version number.
    "order": {
        "hops": [
            {
                "from": "Reader",
                "to": "Writer"
            }
        ]
    },
    "setting": {
        "errorLimit": {
            "record": "0" // The maximum number of dirty data records allowed.
        },
        "speed": {
            "throttle": true, // Specifies whether to enable throttling. The value false indicates that throttling is disabled, and the value true indicates that throttling is enabled. The mbps parameter takes effect only when the throttle parameter is set to true.
            "concurrent": 2, // The maximum number of parallel threads.
            "mbps": "12" // The maximum transmission rate. Unit: MB/s.
        }
    }
}

Parameters in code for AnalyticDB for MySQL 3.0 Writer

For each parameter, the description is followed by whether the parameter is required and its default value.

datasource

The name of the data source. It must be the same as the name of the added data source. You can add data sources by using the code editor.

Required: Yes. No default value.

table

The name of the table to which you want to write data.

Required: Yes. No default value.

writeMode

The write mode. Valid values: insert, replace, and update.

  • insert: If no primary key conflict or unique index conflict occurs, data is written directly to the destination table. If a conflict occurs, the conflicting rows are ignored and no update is performed.

  • replace: If no primary key conflict or unique index conflict occurs, data is written directly to the destination table. If a conflict occurs, the rows that contain the conflicting data are deleted and new rows are inserted. All fields of the original rows are replaced.

  • update: If no primary key conflict or unique index conflict occurs, data is written directly to the destination table. If a conflict occurs, all fields of the original rows are updated with the new values.

    Note: This mode is supported only in script mode.

Required: No. Default value: insert.
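
Conceptually, insert skips conflicting rows, replace behaves roughly like MySQL's REPLACE INTO (delete the old row, then insert the new one), and update behaves roughly like INSERT ... ON DUPLICATE KEY UPDATE. A minimal writer sketch that overwrites conflicting rows, reusing the placeholder data source and table names from the example above:

"parameter": {
    "datasource": "hangzhou_ads",
    "table": "t5",
    "column": ["id", "value"],
    "writeMode": "replace" // On a primary key or unique index conflict, delete the old row and insert the new one.
}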

column

The names of the columns to which you want to write data. Separate the names with commas (,), as in "column": ["id", "name", "age"]. To write data to all the columns in the destination table, set this parameter to an asterisk (*), as in "column": ["*"].

Note: If a column name contains select, enclose the name in backticks (`). For example, write item_select_no as `item_select_no`.

Required: Yes. No default value.
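
A minimal sketch of a column list whose second name contains the reserved word select; the column names are hypothetical:

"column": [
    "id",
    "`item_select_no`" // Backticks escape the name because it contains "select".
]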

preSql

The SQL statements that you want to execute before the synchronization task is run. For example, you can use this parameter to delete outdated data. You can execute only one SQL statement on the codeless UI and multiple SQL statements in the code editor.

Note: If you specify multiple SQL statements, the statements are not executed in the same transaction.

Required: No. No default value.

postSql

The SQL statements that you want to execute after the synchronization task is run. For example, you can use this parameter to add a timestamp. You can execute only one SQL statement on the codeless UI and multiple SQL statements in the code editor.

Note: If you specify multiple SQL statements, the statements are not executed in the same transaction.

Required: No. No default value.
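
As a sketch, a writer could delete outdated rows before the run and record completion afterward. The table names and SQL statements below are illustrative, not from the source:

"parameter": {
    "table": "t5",
    "preSql": [
        "DELETE FROM t5 WHERE gmt_create < '$bizdate'" // Clear outdated rows before writing.
    ],
    "postSql": [
        "UPDATE sync_audit SET finished_at = NOW() WHERE job_name = 'adb_daily'" // Hypothetical audit table.
    ]
}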

batchSize

The number of data records to write at a time. Set this parameter to an appropriate value based on your business requirements. A larger value reduces the number of network interactions between Data Integration and AnalyticDB for MySQL 3.0 and increases throughput. However, an excessively large value may cause an out of memory (OOM) error during data synchronization.

Required: No. Default value: 1024.