Property (KEY) | Description | Valid value (VALUE) |
--- | --- | --- |
odps.sql.allow.fullscan | Specifies whether to allow full table scans on the project. A full table scan consumes a large amount of resources and reduces data processing efficiency, so we recommend that you do not enable this feature. For how project-level properties are typically set, see the examples after this table. | true or false. |
odps.table.lifecycle | Specifies whether to configure a lifecycle for tables in a project. | optional: The lifecycle clause is optional in a table creation statement. If you do not configure a lifecycle for a table, the table does not expire. mandatory: The lifecycle clause is required in a table creation statement. inherit: If you do not configure a lifecycle for a table when you create the table, the value of odps.table.lifecycle.value is used by default. |
odps.table.lifecycle.value | The lifecycle of a table. Unit: days. | 1 to 37231. Default value: 37231. |
odps.security.ip.whitelist | A whitelist of IP addresses that are authorized to access the project over the cloud product interconnection network. For more information, see Manage IP address whitelists. | A list of IP addresses that are separated by commas (,). |
odps.security.vpc.whitelist | A whitelist of IP addresses that are authorized to access the project over a specific virtual private cloud (VPC). For more information, see Manage IP address whitelists. | RegionID_VPCID[IP Address]. |
READ_TABLE_MAX_ROW | The maximum number of data records that can be returned by a SELECT statement. | 1 to 10000. Default value: 10000. |
odps.sql.type.system.odps2 | Specifies whether to enable the MaxCompute V2.0 data type edition. For more information, see MaxCompute V2.0 data type edition. | true or false. |
odps.sql.hive.compatible | Specifies whether to enable the Hive-compatible data type edition. MaxCompute supports Hive syntax such as inputRecordReader, outputRecordReader, and SerDe only after the Hive-compatible data type edition is enabled. For more information, see Hive-compatible data type edition. | true or false. |
odps.sql.decimal.odps2 | Specifies whether to enable the DECIMAL(precision,scale) type of the MaxCompute V2.0 data type edition. For more information, see MaxCompute V2.0 data type edition. | true or false. |
odps.sql.metering.value.max | The upper limit on resources consumed by an SQL statement. For more information, see Consumption control. | N/A. |
odps.sql.timezone | The time zone of the MaxCompute project that you accessed. For more information about time zones, see Time zone configuration operations. | N/A. |
odps.sql.unstructured.oss.commit.mode | Specifies whether to use the multipart upload feature of Object Storage Service (OSS) when data is written to OSS external tables. For more information, see Write data to OSS. | true or false. |
odps.sql.groupby.orderby.position.alias | Specifies whether integer constants in GROUP BY and ORDER BY clauses are interpreted as the positions (IDs) of columns in the SELECT list. Note: If you set this parameter to true for an existing project, jobs that rely on the original behavior may fail. Make sure that the existing logic still runs correctly before you set this parameter to true; otherwise, configure the parameter at the session level (see the session-level example after this table). | true: Integer constants in the GROUP BY and ORDER BY clauses are used as column IDs in SELECT statements. false: Integer constants in the GROUP BY and ORDER BY clauses are not used as column IDs in SELECT statements. |
odps.forbid.fetch.result.by.bearertoken | Specifies whether to display job results on the Result tab of Logview. This parameter is used to protect data security. | true or false. |
odps.cupidhistory.inprogress.remain.days | The number of days to retain the run history logs of Spark on MaxCompute jobs that are still running. | 1 to 7. Default value: 7. |
odps.cupidhistory.remain.days | The number of days to retain the run history logs of Spark on MaxCompute jobs that have finished running. | 1 to 3. Default value: 3. |
odps.ext.oss.orc.native | Specifies whether to switch from the open source Java library to the C++ native library when jobs in the project read ORC files from external tables. The C++ native library can parse ORC files of later versions and significantly improves the performance of parsing open source data. | true or false. |
odps.ext.parquet.native | Specifies whether to switch from the open source Java library to the C++ native library when jobs in the project read Parquet files from external tables. The C++ native library significantly improves the performance of parsing open source data. After the switch, the number of accesses to the data source may increase if there are many small Parquet files or many data columns. In this case, you can configure the parquet.file.cache.size and parquet.io.buffer.size properties in the WITH SERDEPROPERTIES clause when you create the table to increase the amount of data that is cached for each access to the data source (see the external table example after this table). | true or false. |
odps.security.enabledownloadprivilege | Specifies whether to enable the download control feature. After you enable this feature, you can manage the permissions of users or roles to download table or instance data by using Tunnel commands. This helps improve the security of project data and prevents data leakage. For more information, see Download control. | true or false. |
odps.security.ip.whitelist.services | The Alibaba Cloud service whitelist of a MaxCompute project. If you add an Alibaba Cloud service, such as DataHub or Simple Log Service, to this whitelist, you no longer need to add the IP addresses of that service to the IP address whitelist of the MaxCompute project when the service accesses the project. | A comma-separated list of service names, in the service1,service2 format. Only service names that are registered with MaxCompute can be added. For example, to add Simple Log Service to the whitelist, set this parameter to AliyunLogSLRService,AliyunLogDefaultService. |
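
Most of the properties above are project-level settings. The following is a minimal sketch of how they are commonly applied, assuming the MaxCompute client (odpscmd) and its setproject command. The property names are taken from the table above; the values shown are illustrative placeholders that you should adjust to your own project policy.

```sql
-- Run in the MaxCompute client (odpscmd) with project administrator privileges.
-- All values below are illustrative placeholders.

-- Disallow full table scans to avoid unnecessary resource consumption.
setproject odps.sql.allow.fullscan=false;

-- Let CREATE TABLE statements inherit a default lifecycle of 180 days
-- when no LIFECYCLE clause is specified.
setproject odps.table.lifecycle=inherit;
setproject odps.table.lifecycle.value=180;

-- Limit the number of rows that a SELECT statement can return directly.
setproject READ_TABLE_MAX_ROW=1000;

-- setproject without arguments generally prints the current project properties.
setproject;
```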
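For odps.sql.groupby.orderby.position.alias, the safer approach for an existing project is to enable the behavior per session rather than project-wide. A minimal session-level sketch, assuming a hypothetical table sales_detail with columns region and amount:

```sql
-- Session-level flag: applies only to the statements submitted with it.
set odps.sql.groupby.orderby.position.alias=true;

-- With the flag enabled, 1 refers to the first column in the SELECT list
-- (region) and 2 refers to the second (total_amount).
SELECT region, SUM(amount) AS total_amount
FROM sales_detail
GROUP BY 1
ORDER BY 2 DESC;
```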
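For odps.ext.parquet.native, the table above notes that parquet.file.cache.size and parquet.io.buffer.size can be set in the WITH SERDEPROPERTIES clause when jobs read many small Parquet files or many columns. The external table sketch below assumes Hive-style DDL clause order and uses a hypothetical schema, OSS path, and property values; verify the exact syntax and suitable values against the OSS external table documentation before use.

```sql
-- Hypothetical OSS external table over Parquet files; schema, location,
-- and property values are placeholders for illustration only.
CREATE EXTERNAL TABLE IF NOT EXISTS parquet_events_ext (
  event_id   BIGINT,
  event_name STRING,
  event_time DATETIME
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
  'parquet.file.cache.size' = '1024',   -- data cached per access to the source (placeholder value)
  'parquet.io.buffer.size'  = '4096'    -- I/O buffer size per read (placeholder value)
)
STORED AS PARQUET
LOCATION 'oss://oss-cn-hangzhou-internal.aliyuncs.com/my-bucket/parquet-events/';
```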