AnalyticDB: Config and hint configuration parameters

Last Updated: May 24, 2024

AnalyticDB for MySQL allows you to configure parameters by executing a SET ADB_CONFIG statement or specifying a hint. A SET ADB_CONFIG statement applies to the entire AnalyticDB for MySQL cluster and can be used to make global configurations. A hint applies only to a specific SQL statement and does not affect global configurations. If you use both a SET ADB_CONFIG statement and a hint to configure a specific configuration item, the hint takes precedence. This topic describes the common config and hint configuration parameters in AnalyticDB for MySQL.
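The precedence rule can be illustrated with the QUERY_TIMEOUT parameter, which supports both configuration methods. This is a minimal sketch; the `orders` table is hypothetical:

```sql
-- Cluster-wide default: all queries time out after 60 seconds.
SET ADB_CONFIG QUERY_TIMEOUT=60000;

-- The hint overrides the cluster-wide setting for this statement only:
-- this query times out after 5 seconds, while other queries still use 60 seconds.
/*+ QUERY_TIMEOUT=5000 */ SELECT COUNT(*) FROM orders;
```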

Common config parameters

Each entry below lists the configuration item, the parameter, a description, an example, and related references.

Switchover window

REPLICATION_SWITCH_TIME_RANGE

The window during which the existing cluster is switched over to a new cluster. During a switchover to a new cluster, the existing cluster becomes read-only for a period of 5 to 10 minutes. After you connect to the existing cluster, you can use the REPLICATION_SWITCH_TIME_RANGE parameter to specify the switchover window.

Note

If you do not specify the switchover window, the existing cluster is switched over to a new cluster after the incremental data of the existing cluster is synchronized in real time to the new cluster.

SET ADB_CONFIG REPLICATION_SWITCH_TIME_RANGE=`23:00, 23:30`;

Scale a Data Warehouse Edition cluster

Maximum number of values specified by the IN operator

MAX_IN_ITEMS_COUNT

The maximum number of values that can be specified by the IN operator. Default value: 4000. The value must be an integer that is greater than 0.

SET ADB_CONFIG MAX_IN_ITEMS_COUNT=4000;

Writes and queries

Query timeout period

QUERY_TIMEOUT

The timeout period of all queries in a cluster. The value must be an integer that is greater than 0. Unit: milliseconds.

SET ADB_CONFIG QUERY_TIMEOUT=1000;

Timeout period of INSERT, UPDATE, and DELETE statements

INSERT_SELECT_TIMEOUT

The maximum execution duration of the INSERT, UPDATE, and DELETE statements in a cluster. Default value: 24 × 3600000. The value must be an integer that is greater than 0. Unit: milliseconds.

SET ADB_CONFIG INSERT_SELECT_TIMEOUT=3600000;

Limits

Filter conditions without pushdown

  • Minor version 3.1.4 or later: FILTER_NOT_PUSHDOWN_COLUMNS

  • Minor version earlier than 3.1.4: NO_INDEX_COLUMNS

Disables filter condition pushdown for specific columns in a cluster.

  • Minor version 3.1.4 or later: SET ADB_CONFIG FILTER_NOT_PUSHDOWN_COLUMNS=[Schema1.tableName1:colName1|colName2;Schema2.tableName2:colName1|colName2]

  • Minor version earlier than 3.1.4: SET ADB_CONFIG NO_INDEX_COLUMNS=[tableName1.colName1;colName2,tableName2.colName1]

Filter conditions without pushdown
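As an illustration of the template above, the following statement disables filter condition pushdown on a cluster of minor version 3.1.4 or later. The schema, table, and column names are hypothetical:

```sql
-- Disable filter condition pushdown for the order_note and comment columns
-- of test_db.orders, and for the bio column of test_db.users.
SET ADB_CONFIG FILTER_NOT_PUSHDOWN_COLUMNS=[test_db.orders:order_note|comment;test_db.users:bio];
```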

Query execution mode

QUERY_TYPE

The query execution mode of a cluster. Valid values:

  • interactive

  • batch

Note

You cannot change the query execution mode for an AnalyticDB for MySQL Data Warehouse Edition (V3.0) cluster in reserved mode or an AnalyticDB for MySQL Data Lakehouse Edition (V3.0) cluster.

SET ADB_CONFIG QUERY_TYPE=interactive;

Query execution modes

Query queue

XIHE_ENV_QUERY_ETL_MAX_CONCURRENT_SIZE

The maximum number of queries that can be executed for LOWEST queues on a single frontend node. Default value: 20.

SET ADB_CONFIG XIHE_ENV_QUERY_ETL_MAX_CONCURRENT_SIZE=20;

Priority queues and concurrency control of interactive resource groups

XIHE_ENV_QUERY_ETL_MAX_QUEUED_SIZE

The maximum number of queries that can be queued for LOWEST queues on a single frontend node. Default value: 200.

SET ADB_CONFIG XIHE_ENV_QUERY_ETL_MAX_QUEUED_SIZE=200;

XIHE_ENV_QUERY_LOW_PRIORITY_MAX_CONCURRENT_SIZE

The maximum number of queries that can be executed for LOW queues on a single frontend node. Default value: 20.

SET ADB_CONFIG XIHE_ENV_QUERY_LOW_PRIORITY_MAX_CONCURRENT_SIZE=20;

XIHE_ENV_QUERY_LOW_PRIORITY_MAX_QUEUED_SIZE

The maximum number of queries that can be queued for LOW queues on a single frontend node. Default value: 200.

SET ADB_CONFIG XIHE_ENV_QUERY_LOW_PRIORITY_MAX_QUEUED_SIZE=200;

XIHE_ENV_QUERY_NORMAL_MAX_CONCURRENT_SIZE

The maximum number of queries that can be executed for NORMAL queues on a single frontend node. Default value: 20.

SET ADB_CONFIG XIHE_ENV_QUERY_NORMAL_MAX_CONCURRENT_SIZE=20;

XIHE_ENV_QUERY_NORMAL_MAX_QUEUED_SIZE

The maximum number of queries that can be queued for NORMAL queues on a single frontend node. Default value: 200.

SET ADB_CONFIG XIHE_ENV_QUERY_NORMAL_MAX_QUEUED_SIZE=200;

XIHE_ENV_QUERY_HIGH_MAX_CONCURRENT_SIZE

The maximum number of queries that can be executed for HIGH queues on a single frontend node. Default value: 40.

SET ADB_CONFIG XIHE_ENV_QUERY_HIGH_MAX_CONCURRENT_SIZE=40;

XIHE_ENV_QUERY_HIGH_MAX_QUEUED_SIZE

The maximum number of queries that can be queued for HIGH queues on a single frontend node. Default value: 400.

SET ADB_CONFIG XIHE_ENV_QUERY_HIGH_MAX_QUEUED_SIZE=400;

Data import by using Object Storage Service (OSS) external tables

HIVE_SPLIT_ASYNC_GENERATION_ENABLED

Specifies whether to asynchronously submit tasks that run splits to a queue for execution when Hive generates an execution plan. Valid values:

  • true

  • false (default)

Note

This parameter is supported only for AnalyticDB for MySQL clusters of V3.1.10.1 or later.

SET ADB_CONFIG HIVE_SPLIT_ASYNC_GENERATION_ENABLED=true;

Data import by using MaxCompute external tables

SQL_OUTPUT_BATCH_SIZE

The maximum number of entries that can be imported at a time. The value must be an integer that is greater than 0.

SET ADB_CONFIG SQL_OUTPUT_BATCH_SIZE = 6000;

Import and export

ENABLE_ODPS_MULTI_PARTITION_PART_MATCH

Specifies whether to obtain the number of entries in each MaxCompute partition in advance.

SET ADB_CONFIG ENABLE_ODPS_MULTI_PARTITION_PART_MATCH=false;

ASYNC_GET_SPLIT

Specifies whether to asynchronously submit split-loading tasks to a queue for execution when MaxCompute generates an execution plan, which optimizes split loading performance. Valid values:

  • true

  • false (default)

Note

This parameter is supported only for AnalyticDB for MySQL clusters of V3.1.10.1 or later.

SET ADB_CONFIG ASYNC_GET_SPLIT=true;

Automatic scheduling for BUILD jobs

RC_CSTORE_BUILD_SCHEDULE_PERIOD

The time range within which you want to schedule BUILD jobs. This parameter applies to job scheduling, not job execution. The start time and the end time must be integers. Valid values: 0 to 24.

SET ADB_CONFIG RC_CSTORE_BUILD_SCHEDULE_PERIOD=`6,8`;

Configure automatic scheduling for BUILD jobs

Scheduling priority of BUILD jobs

RC_BUILD_TASK_PRIORITY_LIST

The scheduling priorities of BUILD jobs for one or more tables. The configured priorities are effective until you configure new scheduling priorities of BUILD jobs for the tables.

The task_priority parameter specifies the scheduling priority of BUILD jobs. The value must be an integer. Default value: 0.

SET ADB_CONFIG RC_BUILD_TASK_PRIORITY_LIST = `<db1_name>.<table1_name>.<task_priority>;<db2_name>.<table2_name>.<task_priority>`;

BUILD
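Substituting hypothetical database and table names into the template above, the following statement schedules BUILD jobs for `demo_db.orders` ahead of `demo_db.users`:

```sql
-- A greater task_priority value specifies a higher scheduling priority.
-- Both database and table names are hypothetical.
SET ADB_CONFIG RC_BUILD_TASK_PRIORITY_LIST=`demo_db.orders.5;demo_db.users.1`;
```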

Elastic import

RC_ELASTIC_JOB_SCHEDULER_ENABLE

Enables the elastic import feature.

SET ADB_CONFIG RC_ELASTIC_JOB_SCHEDULER_ENABLE=true;

REMOTE_CALL function

XIHE_REMOTE_CALL_SERVER_ENDPOINT

The internal endpoint of Function Compute used when you invoke the REMOTE_CALL function for user-defined functions (UDFs). For more information, see Endpoints.

SET ADB_CONFIG XIHE_REMOTE_CALL_SERVER_ENDPOINT="1234567890000****.cn-zhangjiakou-internal.fc.aliyuncs.com";

UDFs (invitational preview)

XIHE_REMOTE_CALL_SERVER_AK

The AccessKey ID of the Alibaba Cloud account or Resource Access Management (RAM) user that owns the Function Compute service. This parameter is used when you invoke the REMOTE_CALL function for UDFs.

SET ADB_CONFIG XIHE_REMOTE_CALL_SERVER_AK=LTAI5t8sqJn5GhpBVtN8****;

XIHE_REMOTE_CALL_SERVER_SK

The AccessKey secret of the Alibaba Cloud account or RAM user that owns the Function Compute service. This parameter is used when you invoke the REMOTE_CALL function for UDFs.

SET ADB_CONFIG XIHE_REMOTE_CALL_SERVER_SK=HlClegbiV5mJjBYBJHEZQOnRF7****;

XIHE_REMOTE_CALL_COMPRESS_ENABLED

Specifies whether to compress data into the GZIP format before sending the data to Function Compute when you invoke the REMOTE_CALL function for UDFs.

SET ADB_CONFIG XIHE_REMOTE_CALL_COMPRESS_ENABLED=true;

XIHE_REMOTE_CALL_MAX_BATCH_SIZE

The maximum number of data rows that can be sent at a time to Function Compute when you invoke the REMOTE_CALL function for UDFs.

SET ADB_CONFIG XIHE_REMOTE_CALL_MAX_BATCH_SIZE=500000;

Split flow control

SPLIT_FLOW_CONTROL_ENABLED

Specifies whether to enable the split flow control feature. AnalyticDB for MySQL allows you to configure the quota for concurrent split scans on a node to prevent node instability caused by excessive concurrent split scans and resource usage. Valid values:

  • true (default)

  • false

SET ADB_CONFIG SPLIT_FLOW_CONTROL_ENABLED=true;

Split flow control

NODE_LEVEL_SPLIT_FLOW_CONTROL_ENABLED

Specifies whether to enable the feature that allows the quota for concurrent split scans in a task to be dynamically adjusted. AnalyticDB for MySQL allows you to dynamically adjust the quota for concurrent split scans in a task based on the overall quota for concurrent split scans of the node on which the task runs. Valid values:

  • true

  • false (default)

SET ADB_CONFIG NODE_LEVEL_SPLIT_FLOW_CONTROL_ENABLED=true;

MIN_RUNNING_SPLITS_LIMIT_PER_TASK

The minimum quota for concurrent split scans in a task. Default value: 1. The valid values range from 1 up to the value of the TARGET_RUNNING_SPLITS_LIMIT_PER_TASK parameter.

SET ADB_CONFIG MIN_RUNNING_SPLITS_LIMIT_PER_TASK=24;

TARGET_RUNNING_SPLITS_LIMIT_PER_TASK

The intermediate quota for concurrent split scans in a task. The actual quota is dynamically increased or decreased based on the value of this parameter. Default value: 32. The valid values range from the value of the MIN_RUNNING_SPLITS_LIMIT_PER_TASK parameter up to the value of the MAX_RUNNING_SPLITS_LIMIT_PER_TASK parameter.

SET ADB_CONFIG TARGET_RUNNING_SPLITS_LIMIT_PER_TASK=32;

MAX_RUNNING_SPLITS_LIMIT_PER_TASK

The maximum quota for concurrent split scans in a task. Default value: 64. The value of this parameter must be greater than the value of the TARGET_RUNNING_SPLITS_LIMIT_PER_TASK parameter.

SET ADB_CONFIG MAX_RUNNING_SPLITS_LIMIT_PER_TASK=128;

WORKER_MAX_RUNNING_SOURCE_SPLITS_PER_NODE

The quota for concurrent split scans on a storage node. Default value: 256. We recommend that you do not change the default value, because an excessively large or small quota can affect cluster performance.

SET ADB_CONFIG WORKER_MAX_RUNNING_SOURCE_SPLITS_PER_NODE=256;

EXECUTOR_MAX_RUNNING_SOURCE_SPLITS_PER_NODE

The quota for concurrent split scans on a compute node. Default value: 256. We recommend that you do not change the default value, because an excessively large or small quota can affect cluster performance.

SET ADB_CONFIG EXECUTOR_MAX_RUNNING_SOURCE_SPLITS_PER_NODE=256;

Common hint parameters

Each entry below lists the configuration item, the parameter, a description, an example, and related references.

Query timeout period

QUERY_TIMEOUT

The timeout period of a single query. The value must be an integer that is greater than 0. Unit: milliseconds.

/*+ QUERY_TIMEOUT=1000 */select count(*) from t;

Writes and queries

Timeout period of INSERT, UPDATE, and DELETE statements

INSERT_SELECT_TIMEOUT

The maximum execution duration of the INSERT, UPDATE, and DELETE statements in a query. Default value: 24 × 3600000. The value must be an integer that is greater than 0. Unit: milliseconds.

/*+ INSERT_SELECT_TIMEOUT=3600000 */update customer set customer_name ='adb' where customer_id ='2369';

Limits

Filter conditions without pushdown

  • Minor version 3.1.4 or later: FILTER_NOT_PUSHDOWN_COLUMNS

  • Minor version earlier than 3.1.4: NO_INDEX_COLUMNS

Disables filter condition pushdown for specific columns in a query.

  • Minor version 3.1.4 or later: /*+ FILTER_NOT_PUSHDOWN_COLUMNS=[Schema1.table1:colName1|colName2;Schema2.table2:colName1|colName2] */

  • Minor version earlier than 3.1.4: /*+ NO_INDEX_COLUMNS=[tableName1.colName1;colName2,tableName2.colName1] */

Filter conditions without pushdown

Scheduling priority of BUILD jobs

build_task_priority

The scheduling priority of BUILD jobs. You can use a hint to configure the scheduling priority of BUILD jobs for only one table at a time. Default value: 0. The value must be an integer that is greater than or equal to 0. A greater value specifies a higher scheduling priority.

/*+ BUILD_TASK_PRIORITY = 2*/ BUILD TABLE db.table;

Configure scheduling priorities for BUILD jobs

Elastic import

  • elastic_load

  • elastic_load_configs

Selects elastic import and configures a resource group for the import job.

/*+ ELASTIC_LOAD=true, ELASTIC_LOAD_CONFIGS=[adb.load.resource.group.name=resource_group]*/

SUBMIT JOB INSERT OVERWRITE INTO adb_demo.adb_import_test SELECT * FROM adb_external_db.person;

Execution plan

  • Minor version 3.1.8 or later: O_CBO_RULE_SWAP_OUTER_JOIN

  • Minor version earlier than 3.1.8: LEFT_TO_RIGHT_ENABLED

The hash join algorithm builds its hash table from the right table, and LEFT JOIN does not reorder the left and right tables. As a result, a large right table may cause issues such as slow execution and excessive memory consumption. You can add a hint before an SQL statement to allow the optimizer to change LEFT JOIN to RIGHT JOIN based on the resource consumption. Valid values:

  • true

  • false (default)

  • /*+ O_CBO_RULE_SWAP_OUTER_JOIN=true*/

  • /*+ LEFT_TO_RIGHT_ENABLED=true*/

Change LEFT JOIN to RIGHT JOIN
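For example, on a cluster of minor version 3.1.8 or later, the hint can be attached to a LEFT JOIN query as follows. The `users` and `orders` tables are hypothetical:

```sql
-- Allow the optimizer to rewrite LEFT JOIN as RIGHT JOIN when the right
-- table (orders) is large, so the smaller side builds the hash table.
/*+ O_CBO_RULE_SWAP_OUTER_JOIN=true */
SELECT u.user_id, o.order_id
FROM users u
LEFT JOIN orders o ON u.user_id = o.user_id;
```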

reorder_joins

Specifies whether to enable the automatic join order adjustment feature. By default, the automatic join order adjustment feature of AnalyticDB for MySQL is enabled. After you disable this feature, you can manually adjust join orders based on the data characteristics of queries. This allows you to execute queries based on the join orders in the written SQL statements. Valid values:

  • true (default)

  • false

/*+ REORDER_JOINS=false*/

Manually adjust join orders

aggregation_path_type

Specifies whether to forcibly skip partial aggregation for a query. In grouping and aggregation scenarios that involve a large number of groups, two-step aggregation consumes a large number of computing resources. You can set the value of this hint parameter to single_agg to skip partial aggregation and directly perform final aggregation for a query. Valid values:

  • single_agg: forcibly skips partial aggregation.

  • auto (default): allows the optimizer to automatically determine whether to skip partial aggregation.

/*+ AGGREGATION_PATH_TYPE=single_agg*/

Grouping and aggregation query optimization

Runtime policy

hash_partition_count

The number of concurrent tasks into which each computing job can be divided. Task division helps make full use of computing resources. Default value: 32. The value must be an integer that is greater than 0.

/*+ HASH_PARTITION_COUNT=32*/

None

task_writer_count

The number of concurrent import jobs executed by using the INSERT INTO SELECT statement. If a storage node is overloaded, you can use this hint parameter to reduce the number of concurrent import jobs. Default value: 16. The value must be an integer that is greater than 0.

/*+ TASK_WRITER_COUNT=8*/

cte_execution_mode

Specifies whether to enable the common table expression (CTE) execution optimization feature. After this feature is enabled, a CTE subquery that is referenced multiple times is executed only once and its result is reused, which improves the performance of specific queries. Valid values:

  • shared: enables the CTE execution optimization feature.

  • inline (default): disables the CTE execution optimization feature.

/*+ CTE_EXECUTION_MODE=shared*/

WITH
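A sketch of the hint applied to a repeatedly referenced CTE. The `orders` table is hypothetical:

```sql
-- With CTE_EXECUTION_MODE=shared, the daily_sales subquery is
-- executed once and its result is reused by both references.
/*+ CTE_EXECUTION_MODE=shared */
WITH daily_sales AS (
  SELECT order_date, SUM(amount) AS total
  FROM orders
  GROUP BY order_date
)
SELECT * FROM daily_sales
UNION ALL
SELECT * FROM daily_sales;
```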

Limits on query conditions

query_max_shuffle_data_size_mb

The maximum amount of shuffled data for a query. By default, no error is returned regardless of the amount of shuffled data. If you specify this parameter and the amount of shuffled data in a query exceeds the specified value, the system terminates the query and returns an error. The value must be an integer that is greater than 0. Unit: MB.

/*+ QUERY_MAX_SHUFFLE_DATA_SIZE_MB=1024*/

None

max_select_items_count

The maximum number of SELECT conditions. Default value: 1024. The value must be an integer that is greater than 0.

/*+ MAX_SELECT_ITEMS_COUNT=2048*/

max_in_items_count

The maximum number of IN conditions. Default value: 4000. The value must be an integer that is greater than 0.

/*+ MAX_IN_ITEMS_COUNT=1000*/

max_where_items_count

The maximum number of WHERE conditions. Default value: 256. The value must be an integer that is greater than 0.

/*+ MAX_WHERE_ITEMS_COUNT=512*/

OSS external tables

output_filename

The name of the OSS object to which you want to export data. When you use an OSS external table to export data, you can add this hint before the data export statement to specify the name of an OSS object.

/*+ OUTPUT_FILENAME=output.csv*/

Export data to OSS
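For example, assuming a hypothetical OSS external table named `oss_export_table`, the hint names the exported OSS object:

```sql
-- Export query results to an OSS object named output.csv.
-- Both table names are hypothetical.
/*+ OUTPUT_FILENAME=output.csv */
INSERT INTO oss_export_table SELECT * FROM source_table;
```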

sql_output_oss_file_head_enable

Specifies whether to include column names in the data exported to OSS. By default, the data exported to OSS by using an OSS external table does not contain column names. If you want the data to have column names, add the /*+ SQL_OUTPUT_OSS_FILE_HEAD_ENABLE=true*/ hint before the data import statement. Valid values:

  • true

  • false (default)

/*+ SQL_OUTPUT_OSS_FILE_HEAD_ENABLE=true*/

hive_split_async_generation_enabled

Specifies whether to asynchronously submit tasks that run splits to a queue for execution when Hive generates an execution plan. Valid values:

  • true

  • false (default)

Note

This parameter is supported only for AnalyticDB for MySQL clusters of V3.1.10.1 or later.

/*+ HIVE_SPLIT_ASYNC_GENERATION_ENABLED=true*/

Use external tables to import data to Data Lakehouse Edition

fast_parquet_reader_enable

Specifies whether to fall back to the native Parquet reader if the statistics in a Parquet file are incorrect. Valid values:

  • true (default)

  • false

/*+ FAST_PARQUET_READER_ENABLE=true*/

oss_vectorized_parquet_reader_enable

Specifies whether to enable vectorized reading of Parquet files if the data does not contain NULL values. Valid values:

  • true

  • false (default)

/*+ OSS_VECTORIZED_PARQUET_READER_ENABLE=true*/

oss_vectorized_parquet_reader_null_scene_enable

Specifies whether to enable vectorized reading of Parquet files when the data contains NULL values. Valid values:

  • true

  • false (default)

/*+ OSS_VECTORIZED_PARQUET_READER_NULL_SCENE_ENABLE=true*/

hudi_metadata_table_enabled

Specifies whether to load Hudi metadata. Valid values:

  • true

  • false (default)

/*+ HUDI_METADATA_TABLE_ENABLED=true*/

hudi_query_type

The mode in which you want to query Hudi tables. Valid values:

  • snapshot: queries all data that is committed or compacted as of a specific point in time.

  • read_optimized (default): queries the data as of the most recent commit or compaction.

/*+ HUDI_QUERY_TYPE=snapshot*/

hudi_realtime_skip_merge

Specifies whether to skip the merging of base files and log files when Hudi tables are queried in snapshot mode. Valid values:

  • true

  • false (default)

/*+ HUDI_REALTIME_SKIP_MERGE=true*/

hudi_max_split_size

The maximum size of a split. Default value: 128. Unit: MB.

/*+ HUDI_MAX_SPLIT_SIZE=128*/

hudi_split_parallel_generation_enabled

Specifies whether to generate splits in parallel. Valid values:

  • true (default)

  • false

/*+ HUDI_SPLIT_PARALLEL_GENERATION_ENABLED=true*/

hudi_data_skipping_enabled

Specifies whether to use the statistics in the Hudi metadata table to skip splits that do not need to be read, which improves query performance. Valid values:

  • true

  • false (default)

/*+ HUDI_DATA_SKIPPING_ENABLED=true*/

hudi_split_async_generation_enabled

Specifies whether to asynchronously generate Hudi splits during the generation of an execution plan. Valid values:

  • true (default)

  • false

/*+ HUDI_SPLIT_ASYNC_GENERATION_ENABLED=true*/

ApsaraDB RDS for MySQL external tables

jdbc_scan_splits

The number of splits that are involved in a Java Database Connectivity (JDBC)-based TableScan operation. Valid values: 1 to 100. Default value: 1.

/*+ JDBC_SCAN_SPLITS=1*/

Use external tables to import data to Data Lakehouse Edition

jdbc_split_column

The name of the column that is used to divide splits when you read an ApsaraDB RDS for MySQL external table.

/*+ JDBC_SPLIT_COLUMN=column1*/

jdbc_insert_directly

Specifies whether to directly write data to the destination table. By default, when you use an ApsaraDB RDS for MySQL external table to write data, the data is written first to a temporary table and then to the destination table. You can use the /*+ JDBC_INSERT_DIRECTLY=true*/ hint to write data directly to the destination table. However, if errors occur during the write process, the destination table may contain dirty data. Valid values:

  • true

  • false (default)

/*+ JDBC_INSERT_DIRECTLY=true*/

Tablestore external tables

ots-insert-as-update

Specifies whether to replace the UPDATE statement with the INSERT statement. Valid values:

  • true

  • false (default)

/*+ OTS-INSERT-AS-UPDATE=true*/

Import data from Tablestore

MaxCompute external tables

odps_split_limit

The maximum number of splits allowed when you use Tunnel to read data. Valid values: 1 to 1000. Default value: 1000.

/*+ ODPS_SPLIT_LIMIT=1000*/

Use external tables to import data to Data Lakehouse Edition

ignore_partition_cache

Specifies whether to directly query the partition metadata from the MaxCompute table instead of using the cached partition metadata if a query is executed based on partitioning conditions.

  • true

  • false (default)

/*+ IGNORE_PARTITION_CACHE=true*/

rows_per_split

The maximum number of rows that a single split can contain when you use Tunnel to read data. Valid values: 10000 to 500000. Default value: 500000.

/*+ ROWS_PER_SPLIT=500000*/

storage_api_enabled

Specifies whether to use MaxCompute Storage API operations to read MaxCompute data. Compared with the traditional Tunnel service, the MaxCompute Storage API greatly improves read performance.

Valid values:

  • true

  • false (default)

/*+ STORAGE_API_ENABLED=true*/

split_byte_size

The maximum size of a single split when you use MaxCompute Storage API to read data. Default value: 256. Unit: MB.

/*+ SPLIT_BYTE_SIZE=256*/

max_batch_row_count

The maximum number of rows that a single split can contain when you use MaxCompute Storage API to read data. Default value: 1024.

/*+ MAX_BATCH_ROW_COUNT=1024*/

page_batch_size

The maximum number of rows that each page can contain when you use MaxCompute Storage API to read data. Default value: 1024.

/*+ PAGE_BATCH_SIZE=1024*/

max_allocation_per_split

The maximum peak memory that can be allocated at the split level when you use MaxCompute Storage API to read data. Default value: 300. Unit: MB.

/*+ MAX_ALLOCATION_PER_SPLIT=300*/

async_get_split

Specifies whether to asynchronously submit split-loading tasks to a queue for execution when MaxCompute generates an execution plan, which optimizes split loading performance. Valid values:

  • true

  • false (default)

Note

This parameter is supported only for AnalyticDB for MySQL clusters of V3.1.10.1 or later.

/*+ ASYNC_GET_SPLIT=true*/

Split flow control

SPLIT_FLOW_CONTROL_ENABLED

Specifies whether to enable the split flow control feature. AnalyticDB for MySQL allows you to configure the quota for concurrent split scans on a node to avoid node instability caused by excessive concurrent split scans and resource usage. Valid values:

  • true (default)

  • false

/*+ SPLIT_FLOW_CONTROL_ENABLED=true*/ SELECT * FROM table;

Split flow control

MIN_RUNNING_SPLITS_LIMIT_PER_TASK

The minimum quota for concurrent split scans in a task. Default value: 1. The valid values range from 1 up to the value of the TARGET_RUNNING_SPLITS_LIMIT_PER_TASK parameter.

/*+ MIN_RUNNING_SPLITS_LIMIT_PER_TASK=10*/SELECT * FROM orders;

TARGET_RUNNING_SPLITS_LIMIT_PER_TASK

The intermediate quota for concurrent split scans in a task. The actual quota is dynamically increased or decreased based on the value of this parameter. Default value: 32. The valid values range from the value of the MIN_RUNNING_SPLITS_LIMIT_PER_TASK parameter up to the value of the MAX_RUNNING_SPLITS_LIMIT_PER_TASK parameter.

/*+ TARGET_RUNNING_SPLITS_LIMIT_PER_TASK=32*/SELECT * FROM adb_test;

MAX_RUNNING_SPLITS_LIMIT_PER_TASK

The maximum quota for concurrent split scans in a task. Default value: 64. The value of this parameter must be greater than the value of the TARGET_RUNNING_SPLITS_LIMIT_PER_TASK parameter.

/*+ MAX_RUNNING_SPLITS_LIMIT_PER_TASK=100*/SELECT * FROM adb_test;

Other

resource_group

The resource group used by the query. If you do not specify a resource group, the resource group that is associated with the database account is used. If the database account is not associated with a resource group, the default resource group is used.

/*+ RESOURCE_GROUP=my_group_name*/

None

dml_exception_to_null_enabled

Specifies whether to write NULL values without throwing an exception if the data written by the INSERT SELECT statement contains an invalid value. Valid values:

  • true: writes NULL values without throwing an exception.

  • false (default): throws an exception.

/*+ DML_EXCEPTION_TO_NULL_ENABLED=true*/

display_hint

The identifier of the query, such as a string that specifies the user who executes the query.

/*+ DISPLAY_HINT=from_user_123*/