This topic explains how to create and drop tables using SQL statements.
Type | Feature | Role |
Create a table | Create a non-partitioned table, a partitioned table, a foreign table, or a clustered table. | Users who have the CREATE TABLE permission on a project. |
Drop a table | Drop a partitioned table or a non-partitioned table. | Users who have the Drop permission on tables. |
Create Table
Create various types of tables, including non-partitioned, partitioned, foreign, and clustered tables.
Limits
A partitioned table can have up to six levels of partitions, such as year/month/week/day/hour/minute.
By default, a table can have up to 60,000 partitions. This limit can be adjusted based on your business needs.
For detailed information on table limits, see Limits on SQL Usage.
Command Format
Create an internal table.
CREATE [OR REPLACE] TABLE [IF NOT EXISTS] <table_name> ( <col_name> <data_type>, ... ) [comment <table_comment>] [PARTITIONED BY (<col_name> <data_type> [comment <col_comment>], ...)];
Create a clustered table.
CREATE TABLE [IF NOT EXISTS] <table_name> ( <col_name> <data_type>, ... ) [CLUSTERED BY | RANGE CLUSTERED BY (<col_name> [, <col_name>, ...]) [SORTED BY (<col_name> [ASC | DESC] [, <col_name> [ASC | DESC] ...])] INTO <number_of_buckets> buckets];
Create a foreign table.
For instance, to create an OSS foreign table using the built-in text data parser, refer to Create an OSS Foreign Table.
CREATE EXTERNAL TABLE [IF NOT EXISTS] <mc_oss_extable_name> ( <col_name> <data_type>, ... ) STORED AS '<file_format>' [WITH SERDEPROPERTIES (options)] LOCATION '<oss_location>';
Create a table and specify its type.
Designate the table as transactional to enable updates or deletions post-creation, subject to certain restrictions. Consider your business requirements when creating a transactional table.
CREATE [EXTERNAL] TABLE [IF NOT EXISTS] <table_name> ( <col_name> <data_type> [NOT NULL] [DEFAULT <default_value>] [comment <col_comment>], ... [PRIMARY KEY (<pk_col_name>[, <pk_col_name2>, ...] )]) [comment <table_comment>] [TBLPROPERTIES ("transactional"="true")];
Designate the table as a Delta table to utilize primary key operations, such as upserts, incremental queries, and time travel queries.
CREATE [EXTERNAL] TABLE [IF NOT EXISTS] <table_name> ( <col_name> <data_type> [NOT NULL] [DEFAULT <default_value>] [comment <col_comment>], ... [PRIMARY KEY (<pk_col_name>[, <pk_col_name2>, ...] )]) [comment <table_comment>] [TBLPROPERTIES ("transactional"="true" [, "write.bucket.num" = "N", "acid.data.retain.hours"="hours"...])] [lifecycle <days>];
Create a table based on an existing table.
Replicate data from an existing table to a new one, excluding partition properties. This can be done for foreign tables or tables from external projects implementing the data lakehouse solution.
CREATE TABLE [IF NOT EXISTS] <table_name> [LIFECYCLE <days>] AS <select_statement>;
Create a table with the same schema as an existing table without replicating its data. This applies to foreign tables or tables from external projects implementing the data lakehouse solution.
CREATE TABLE [IF NOT EXISTS] <table_name> [LIFECYCLE <days>] LIKE <existing_table_name>;
Parameter Description
General Parameters
Parameter | Required | Description | Remarks |
OR REPLACE | No | If the table specified by <table_name> already exists, the existing table is dropped and a new table with the same name is created in its place. | You can use this parameter instead of executing a DROP TABLE statement followed by a CREATE TABLE statement. Note Limits: You cannot use CREATE OR REPLACE TABLE together with the IF NOT EXISTS option, the AS <select_statement> clause, or the LIKE <existing_table_name> clause. |
EXTERNAL | No | Specifies that the table to be created is an external table. | None |
IF NOT EXISTS | No | Specifies the behavior when a table with the same name already exists. | If you do not specify the IF NOT EXISTS option and a table with the same name exists, an error is returned. If you specify the IF NOT EXISTS option, a success message is returned regardless of whether a table with the same name exists. The metadata of the existing table remains unchanged. |
table_name | Yes | The name of the table. | The table name is not case-sensitive and cannot contain special characters. The name can contain letters, digits, and underscores (_). The name must start with a letter and cannot exceed 128 bytes in length. If the value of this parameter does not meet the requirements, an error is returned. |
PRIMARY KEY(pk) | No | The primary key of the table. | You can specify one or more columns as the primary key. This indicates that the combination of these columns must be unique in the table. You must comply with the standard SQL syntax for primary keys. The columns that are defined as the primary key must be set to not null and cannot be modified. Important This parameter is only used for Delta Tables. |
col_name | Yes | The name of a table column. | The column name is not case-sensitive and cannot contain special characters. The name can contain letters, digits, and underscores (_). The name must start with a letter and cannot exceed 128 bytes in length. If the value of this parameter does not meet the requirements, an error is returned. |
col_comment | No | The comment of a column. | The comment must be a valid string that does not exceed 1,024 bytes in length. If the value of this parameter does not meet the requirements, an error is returned. |
data_type | Yes | The data type of the column. | Data types include BIGINT, DOUBLE, BOOLEAN, DATETIME, DECIMAL, and STRING. For more information, see Data Type Version Description. |
NOT NULL | No | The NOT NULL attribute can be configured in the CREATE TABLE syntax to specify that the values in a specific column cannot be NULL. | For more information about how to modify the NOT NULL attribute, see Partition Operations. |
default_value | No | The default value of the column. | If no value is specified for the column when data is written, the default value is used. Note The default value cannot be set to a function. |
table_comment | No | The comment of the table. | The comment must be a valid string that does not exceed 1,024 bytes in length. If the value of this parameter does not meet the requirements, an error is returned. |
LIFECYCLE | No | The lifecycle of the table. | The value must be a positive integer. Unit: days. |
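For orientation, the following is a minimal sketch that combines several of the general parameters described above (column comments, NOT NULL, DEFAULT, a table comment, and LIFECYCLE). The table and column names are illustrative, not part of the reference syntax.
CREATE TABLE IF NOT EXISTS orders_demo (
    order_id BIGINT NOT NULL comment 'Order ID, cannot be NULL',
    status   STRING DEFAULT 'NEW' comment 'Order status, defaults to NEW',
    amount   DOUBLE comment 'Order amount'
)
comment 'Demo table that combines the general parameters'
LIFECYCLE 30;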
Parameters for partitioned tables
Parameter | Required | Description | Remarks |
PARTITIONED BY | Yes | The partition fields of a partitioned table. | None |
col_name | Yes | The name of a partition key column. | The column name is not case-sensitive and cannot contain special characters. The name can contain letters, digits, and underscores (_). The name must start with a letter and cannot exceed 128 bytes in length. If the value of this parameter does not meet the requirements, an error is returned. |
data_type | Yes | The data type of a partition key column. | In the MaxCompute V1.0 data type edition, partition key columns must be of the STRING type. In the MaxCompute V2.0 data type edition, partition key columns can be of the TINYINT, SMALLINT, INT, BIGINT, VARCHAR, or STRING type. For more information, see Data Type Version Description. If you use a partition field to partition a table, a full table scan is not required when you add partitions, update partition data, or read partition data. This improves the efficiency of data processing. |
col_comment | No | The comment of a partition key column. | The comment must be a valid string that does not exceed 1,024 bytes in length. If the value of this parameter does not meet the requirements, an error is returned. |
Partition key column values must not contain double-byte characters, such as Chinese characters. A value must start with a letter, can contain letters, digits, and supported special characters, and must be 1 to 255 bytes in length. Supported special characters include spaces, colons (:), underscores (_), dollar signs ($), number signs (#), periods (.), exclamation points (!), and at signs (@). The behavior of other characters, such as the escape characters \t and \n and the slash (/), is undefined.
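As a hedged illustration of the partition key data types listed above, the following sketch creates a table whose partition key column is of the BIGINT type. It assumes the MaxCompute V2.0 data type edition (enabled here at the session level), and the table and column names are illustrative.
SET odps.sql.type.system.odps2=true;
CREATE TABLE IF NOT EXISTS sale_detail_v2 (
    shop_name   STRING,
    total_price DOUBLE
)
PARTITIONED BY (sale_year BIGINT comment 'Non-STRING partition key, allowed in the V2.0 data type edition');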
Clustered table parameters
Clustered tables are categorized into hash-clustered and range-clustered tables.
Hash-clustered Tables
Parameter | Required | Description | Remarks |
CLUSTERED BY | Yes | The hash key. | MaxCompute performs a hash operation on the specified columns and distributes data to each bucket based on the hash values. To avoid data skew and hot spots and to ensure optimal parallel execution, specify columns whose values are evenly distributed and that have few duplicate values as the hash key. |
SORTED BY | Yes | The sequence of fields in a bucket. | We recommend that you specify the same columns for SORTED BY and CLUSTERED BY to ensure optimal performance. In addition, after you specify the SORTED BY clause, MaxCompute automatically generates an index and uses the index to accelerate query execution. |
number_of_buckets | Yes | The number of hash buckets. | This parameter is required, and its value varies based on the amount of data. By default, MaxCompute supports a maximum of 1,111 reducers, which means a maximum of 1,111 hash buckets. You can use set odps.stage.reducer.num=<concurrency>; to increase this limit. |
For optimal performance, consider the following when specifying the number of hash buckets:
Aim for each hash bucket to hold approximately 500 MB of data. For example, if the partition size is around 500 GB, we recommend 1,000 buckets so that each bucket holds about 500 MB. For larger data volumes, bucket sizes can range from 2 GB to 3 GB. To increase the maximum number of hash buckets beyond 1,111, use set odps.stage.reducer.num=<concurrency>;.
In scenarios where join operations are optimized, removing the shuffle and sort steps can significantly improve performance. For this optimization to apply, the number of hash buckets of one table must be a multiple of the number of hash buckets of the other table, for example, 256 and 512. We recommend that you specify the number of hash buckets as a power of two, such as 512, 1,024, 2,048, or 4,096. This way, the system can automatically split and merge hash buckets, remove the shuffle and sort steps, and improve execution efficiency. A sketch of two such tables is shown after this list.
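The following sketch illustrates the bucket-count guidance above with two hash-clustered tables that share the join key c and use power-of-two bucket counts (512 and 1,024, where one is a multiple of the other). The table and column names are illustrative, and whether the shuffle and sort steps are actually removed depends on the optimizer.
CREATE TABLE orders_hash (a STRING, c BIGINT)
CLUSTERED BY (c) SORTED BY (c) INTO 512 buckets;
CREATE TABLE order_items_hash (b STRING, c BIGINT)
CLUSTERED BY (c) SORTED BY (c) INTO 1024 buckets;
-- A join between the two tables on c can then avoid the shuffle and sort steps.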
Range-clustered Tables
Parameter | Required | Description | Remarks |
RANGE CLUSTERED BY | Yes | The range-clustered columns. | MaxCompute performs the bucket operation on the specified columns and distributes data to each bucket based on the bucket ID. |
SORTED BY | Yes | The sequence of fields in a bucket. | You can use this parameter in the same way as you use it for a hash-clustered table. |
number_of_buckets | Yes | The number of buckets. | The power-of-two best practice for hash-clustered tables does not apply to range-clustered tables. If data is evenly distributed, you can specify any number of buckets. If you do not specify the number of buckets in a range-clustered table, MaxCompute automatically determines the optimal number based on the amount of data. |
For range-clustered tables, JOIN and AGGREGATE operations can be optimized when the join key or group key is the range-clustered key or a prefix of it. In these scenarios, you can remove unnecessary shuffling and improve execution efficiency by enabling the optimization with set odps.optimizer.enable.range.partial.repartitioning=true;. This flag is disabled by default.
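As a sketch of how the flag is used, the following enables the optimization for the current session and then runs an aggregation whose group key is the range-clustered key. It assumes a range-clustered table such as t3 from the Examples section below, which is clustered and sorted by c.
SET odps.optimizer.enable.range.partial.repartitioning=true;
-- The group key c matches the range-clustered key, so the extra shuffle can be skipped.
SELECT c, COUNT(*) AS cnt FROM t3 GROUP BY c;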
Clustered tables optimize aspects such as:
Bucket pruning.
Aggregation.
Storage.
Restrictions on clustered tables include the following:
The INSERT INTO statement is not supported. Use INSERT OVERWRITE instead, as shown in the sketch after this list.
Tunnel commands cannot import data into range-clustered tables because Tunnel does not arrange data in order.
Data backup and restoration features are not supported.
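The following is a minimal sketch of writing to a clustered table with INSERT OVERWRITE, since INSERT INTO is not supported. It assumes the hash-clustered table t1 from the Examples section below and an illustrative staging table named t1_staging with the same columns.
-- t1_staging is an assumed staging table with columns (a STRING, b STRING, c BIGINT).
INSERT OVERWRITE TABLE t1 SELECT a, b, c FROM t1_staging;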
Parameters for foreign tables
The following parameters use the creation of an OSS foreign table as an example. For guidance on creating foreign tables, see External Data Sources.
Parameter | Required | Description |
STORED AS '<file_format>' | Yes | Specifies the file format of the data in the foreign table. |
WITH SERDEPROPERTIES (options) | No | The parameters related to the authorization, compression, and character parsing of the foreign table. |
oss_location | Yes | The OSS storage location of the data in the foreign table. For more information, see Create an OSS Foreign Table. |
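As a hedged sketch of how these parameters fit together, the following creates an OSS foreign table with STORED AS, WITH SERDEPROPERTIES, and LOCATION. The odps.properties.rolearn property value, the table name, and the OSS path are placeholders or assumptions; adjust them to your environment and see Create an OSS Foreign Table for the authoritative options.
CREATE EXTERNAL TABLE IF NOT EXISTS mc_oss_extable_demo (
    id   STRING,
    name STRING
)
STORED AS TEXTFILE
WITH SERDEPROPERTIES ('odps.properties.rolearn'='<ram_role_arn>')   -- Assumed authorization option; replace with your RAM role ARN.
LOCATION 'oss://<oss_endpoint>/<bucket_name>/<directory>/';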
Transaction table and delta table parameters
Parameters for Delta Tables
Delta tables support features like near real-time reads and writes, incremental reads and writes, incremental storage, and real-time updates. Only Delta tables with a primary key are supported.
Parameter | Required | Description | Remarks |
PRIMARY KEY(PK) | Yes | This parameter is required when you create a Delta table. You can specify multiple columns as the primary key. | You must comply with the standard SQL syntax for primary keys. The columns that are defined as the primary key must be set to not null and cannot be modified. After you specify a primary key for a Delta table, duplicate data is removed from the table based on the primary key. The uniqueness constraint for the primary key column is valid in a single partition or in a non-partitioned table. |
transactional | Yes | This parameter is required when you create a Delta table. You must set this parameter to true. | The true value indicates that the table complies with the transaction characteristics of MaxCompute atomicity, consistency, isolation, durability (ACID) tables and the Multi Version Concurrency Control (MVCC) model is used to support snapshot isolation. |
write.bucket.num | No | The default value is 16. | This parameter specifies the number of buckets in each partition of a partitioned table or in a non-partitioned table. It also determines the number of concurrent nodes that are used to write data. You can change the value of this parameter for a partitioned table; if partitions are added to the table, the new value automatically takes effect on the new partitions. You cannot change the value of this parameter for a non-partitioned table. |
acid.data.retain.hours | No | The default value is 24. | The time range, in hours, during which historical data states can be queried by using the time travel feature. If you need to query historical data states for a period longer than 168 hours (7 days), contact MaxCompute technical support. |
acid.incremental.query.out.of.time.range.enabled | No | The default value is false. | If you set this parameter to true, the value of the endTimestamp property specified by an incremental query can be a point in time that is later than the maximum commit time of data in the table. If the value of the endTimestamp property is later than the current time, new data may be inserted into the Delta table, and you may obtain different results for multiple runs of the same query. You can change the value of this parameter for a table. |
acid.write.precombine.field | No | You can use this parameter to specify the name of only one column. | If you specify a column name, the system performs data deduplication based on the primary key (PK) column in the file that is committed together with this parameter. This ensures data uniqueness and consistency. Note If the size of data that is committed at a time exceeds 128 MB, multiple files are generated. This parameter cannot be used for multiple files. |
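The following is a minimal sketch of the acid.write.precombine.field property described above. The table name and the version column are illustrative; the assumption is that this column is used to decide which row wins when rows with the same primary key are committed together.
CREATE TABLE mf_tt_precombine (
    pk      BIGINT NOT NULL PRIMARY KEY,
    val     BIGINT,
    version BIGINT   -- Illustrative column used for deduplication within a single commit.
)
tblproperties (
    "transactional"="true",
    "acid.write.precombine.field"="version"
);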
General parameter requirements for Delta tables:
LIFECYCLE: The lifecycle of the table must be at least as long as the time travel query period, that is, lifecycle >= acid.data.retain.hours / 24. When you create a table, MaxCompute verifies the specified lifecycle and returns an error if it does not meet this requirement.
Unsupported features: Delta tables do not support the CLUSTERED BY clause or the CREATE TABLE ... AS statement, and they cannot be created as external tables.
Additional restrictions:
Only MaxCompute SQL can directly interact with Delta tables.
Existing common tables cannot be converted into Delta tables.
The schema of the primary key columns in a Delta table cannot be modified.
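To make the primary key deduplication concrete, the following sketch writes to the Delta table mf_tt created in the Examples section below. It assumes that writing a row whose primary key already exists replaces the previous row, in line with the upsert behavior described above.
INSERT INTO mf_tt VALUES (1, 10), (2, 20);
-- Writing a row with an existing primary key does not create a duplicate; the row for pk = 1 is updated.
INSERT INTO mf_tt VALUES (1, 100);
SELECT * FROM mf_tt;   -- Expected result: (1, 100) and (2, 20).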
Parameters for transaction tables
Parameter | Required | Description |
TBLPROPERTIES("transactional"="true") | Yes | Sets the table to a transactional table. You can subsequently perform UPDATE and DELETE operations on the table to update or delete data at the row level. |
Transactional tables have the following limitations:
MaxCompute allows you to set a table as transactional only when you create it. Using the ALTER TABLE statement to change an existing table into a transactional table returns an error:
ALTER TABLE not_txn_tbl SET tblproperties("transactional"="true");
-- An error is returned.
FAILED: Catalog Service Failed, ErrorCode: 151, Error Message: Set transactional is not supported
You cannot set clustered or external tables as transactional tables when creating them.
Conversion between transactional tables and MaxCompute internal, external, or clustered tables is not supported.
Transactional table files must be merged manually. For details, see Merge Transactional Table Files.
The merge partition operation is not supported.
Certain limitations apply when jobs of other systems access transactional tables. For example, Graph jobs cannot read from or write to transactional tables, and Spark and PAI jobs can only read from them.
Before you update or delete critical data in a transactional table, or before you perform update, delete, or insert overwrite operations, manually back up the data to another table by using select and insert operations, as shown in the sketch after this list.
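The following is a minimal sketch of the backup-then-modify pattern above. It assumes the transactional table t5 from the Examples section below; t5_backup is an illustrative backup table name.
-- Back up the rows that are about to change by using select and insert operations.
CREATE TABLE t5_backup (id bigint);
INSERT INTO t5_backup SELECT id FROM t5 WHERE id = 1;
-- Then perform the update and delete operations on the transactional table.
UPDATE t5 SET id = id + 1 WHERE id = 1;
DELETE FROM t5 WHERE id = 100;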
Table creation parameters based on existing data tables
Use the CREATE TABLE [IF NOT EXISTS] <table_name> [LIFECYCLE <days>] AS <select_statement>; statement to create a new table and replicate data from an existing table into it. Partition properties and the lifecycle of the source table are not replicated; partition key columns of the source table become regular columns in the new table.
You can use the LIFECYCLE parameter to set the retention period of the new table. This statement can also be used to create an internal table and replicate data from an external table into it.
Execute the CREATE TABLE [IF NOT EXISTS] <table_name> [LIFECYCLE <days>] LIKE <existing_table_name>; statement to create a table with the same schema as an existing table. Data and the lifecycle property of the source table are not replicated.
You can use the LIFECYCLE parameter to set the retention period of the new table. This statement can also be used to create an internal table with the same schema as an existing external table.
Examples
Create a non-partitioned table.
CREATE TABLE test1 (key STRING);
Create a partitioned table.
CREATE TABLE IF NOT EXISTS sale_detail( shop_name STRING, customer_id STRING, total_price DOUBLE) PARTITIONED BY (sale_date STRING, region STRING);
Create a table to replace an existing table.
Create the original table my_table and write data to it.
CREATE OR REPLACE TABLE my_table(a bigint);
INSERT INTO my_table(a) VALUES (1),(2),(3);
Use OR REPLACE to create a table with the same name and modify its fields.
CREATE OR REPLACE TABLE my_table(b string);
Query the my_table table to see the changes.
+------------+
| b          |
+------------+
+------------+
The following SQL statements are invalid:
CREATE OR REPLACE TABLE IF NOT EXISTS my_table(b STRING);
CREATE OR REPLACE TABLE my_table AS SELECT;
CREATE OR REPLACE TABLE my_table LIKE newtable;
Create a table, replicate data from an existing table, and set a lifecycle for the new table.
-- Create a new table named sale_detail_ctas1, replicate data from sale_detail into it, and configure a lifecycle for the new table.
SET odps.sql.allow.fullscan=true;
CREATE TABLE sale_detail_ctas1 LIFECYCLE 10 AS SELECT * FROM sale_detail;
Run the DESC EXTENDED sale_detail_ctas1; command to view table details such as the schema and lifecycle.
Note
The sale_detail table is partitioned, but the CREATE TABLE ... AS <select_statement> statement creates the sale_detail_ctas1 table without inheriting the partition attributes of the source table. The partition key columns of the source table become regular columns in the new table. As a result, sale_detail_ctas1 is a non-partitioned table with five columns.
Create a table by using constants as column values in the SELECT clause.
Note
When you use constants as column values in the SELECT clause, we recommend that you specify column names. Otherwise, the fourth and fifth columns of the created table sale_detail_ctas3 default to names such as _c4 and _c5.
Specify column names.
SET odps.sql.allow.fullscan=true; CREATE TABLE sale_detail_ctas2 AS SELECT shop_name, customer_id, total_price, '2013' AS sale_date, 'China' AS region FROM sale_detail;
Do not specify column names.
SET odps.sql.allow.fullscan=true; CREATE TABLE sale_detail_ctas3 AS SELECT shop_name, customer_id, total_price, '2013', 'China' FROM sale_detail;
Create a table with the same schema as an existing table and set a lifecycle for the new table.
CREATE TABLE sale_detail_like LIKE sale_detail LIFECYCLE 10;
Use the DESC EXTENDED sale_detail_like; command to view details such as the schema and lifecycle.
Note
The schema of the sale_detail_like table matches that of the sale_detail table, including properties such as column names, column comments, and the table comment, but excluding the lifecycle. Data from the sale_detail table is not replicated to the sale_detail_like table.
Create a table with the same schema as a foreign table.
-- Create a new table named mc_oss_extable_orc_like that uses the same schema as the mc_oss_extable_orc foreign table.
CREATE TABLE mc_oss_extable_orc_like LIKE mc_oss_extable_orc;
Run the DESC mc_oss_extable_orc_like; command to view the schema details.
+------------------------------------------------------------------------------------+
| Owner: ALIYUN$****@***.aliyunid.com | Project: max_compute_7u************yoq        |
| TableComment:                                                                       |
+------------------------------------------------------------------------------------+
| CreateTime:               2022-08-11 11:10:47                                       |
| LastDDLTime:              2022-08-11 11:10:47                                       |
| LastModifiedTime:         2022-08-11 11:10:47                                       |
+------------------------------------------------------------------------------------+
| InternalTable: YES      | Size: 0                                                   |
+------------------------------------------------------------------------------------+
| Native Columns:                                                                     |
+------------------------------------------------------------------------------------+
| Field           | Type       | Label | Comment                                      |
+------------------------------------------------------------------------------------+
| id              | string     |       |                                              |
| name            | string     |       |                                              |
+------------------------------------------------------------------------------------+
Create a table using new data types.
SET odps.sql.type.system.odps2=true; CREATE TABLE test_newtype ( c1 TINYINT, c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL, c8 BINARY, c9 TIMESTAMP, c10 ARRAY<MAP<BIGINT,BIGINT>>, c11 MAP<STRING,ARRAY<BIGINT>>, c12 STRUCT<s1:STRING,s2:BIGINT>, c13 VARCHAR(20)) LIFECYCLE 1;
Create a hash-clustered non-partitioned table.
CREATE TABLE t1 (a STRING, b STRING, c BIGINT) CLUSTERED BY (c) SORTED BY (c) INTO 1024 buckets;
Create a hash-clustered partitioned table.
CREATE TABLE t2 (a STRING, b STRING, c BIGINT) PARTITIONED BY (dt STRING) CLUSTERED BY (c) SORTED BY (c) into 1024 buckets;
Create a range-clustered non-partitioned table.
CREATE TABLE t3 (a STRING, b STRING, c BIGINT) RANGE CLUSTERED BY (c) SORTED BY (c) INTO 1024 buckets;
Create a range-clustered partitioned table.
CREATE TABLE t4 (a STRING, b STRING, c BIGINT) PARTITIONED BY (dt STRING) RANGE CLUSTERED BY (c) SORTED BY (c);
Create a transactional non-partitioned table.
CREATE TABLE t5(id bigint) tblproperties("transactional"="true");
Create a transactional partitioned table.
CREATE TABLE IF NOT EXISTS t6(id bigint) PARTITIONED BY (ds string) tblproperties ("transactional"="true");
Create a non-partitioned table and set default values for table fields.
CREATE TABLE test_default( tinyint_name tinyint NOT NULL default 1Y, smallint_name SMALLINT NOT NULL DEFAULT 1S, int_name INT NOT NULL DEFAULT 1, bigint_name BIGINT NOT NULL DEFAULT 1, binary_name BINARY , float_name FLOAT , double_name DOUBLE NOT NULL DEFAULT 0.1, decimal_name DECIMAL(2, 1) NOT NULL DEFAULT 0.0BD, varchar_name VARCHAR(10) , char_name CHAR(2) , string_name STRING NOT NULL DEFAULT 'N', boolean_name BOOLEAN NOT NULL DEFAULT TRUE );
Create an internal table and replicate data from a partitioned external table. Partition properties are not replicated.
Create an OSS foreign table and a MaxCompute internal table.
-- Create an OSS foreign table and insert data into the table.
CREATE EXTERNAL TABLE max_oss_test(a int, b int, c int)
STORED AS TEXTFILE
LOCATION "oss://oss-cn-hangzhou-internal.aliyuncs.com/<bucket_name>";
INSERT INTO max_oss_test VALUES (101, 1, 20241108), (102, 2, 20241109), (103, 3, 20241110);
SELECT * FROM max_oss_test;
-- Result
a    b   c
101  1   20241108
102  2   20241109
103  3   20241110
-- Execute the CREATE TABLE ... AS statement to create an internal table.
CREATE TABLE from_exetbl_oss AS SELECT * FROM max_oss_test;
-- Query data of the internal table.
SELECT * FROM from_exetbl_oss;
-- All data in the internal table is returned.
a    b   c
101  1   20241108
102  2   20241109
103  3   20241110
Run the DESC from_exetbl_oss; command to view the schema of the internal table.
+------------------------------------------------------------------------------------+
| Owner: ALIYUN$***********                                                           |
| Project: ***_*****_***                                                              |
| TableComment:                                                                       |
+------------------------------------------------------------------------------------+
| CreateTime:               2023-01-10 15:16:33                                       |
| LastDDLTime:              2023-01-10 15:16:33                                       |
| LastModifiedTime:         2023-01-10 15:16:33                                       |
+------------------------------------------------------------------------------------+
| InternalTable: YES      | Size: 919                                                 |
+------------------------------------------------------------------------------------+
| Native Columns:                                                                     |
+------------------------------------------------------------------------------------+
| Field           | Type       | Label | Comment                                      |
+------------------------------------------------------------------------------------+
| a               | string     |       |                                              |
| b               | string     |       |                                              |
| c               | string     |       |                                              |
+------------------------------------------------------------------------------------+
Create an internal table and replicate data from a partitioned external table, including partition properties.
Create the internal table from_exetbl_like.
-- Query the external table of the data lakehouse solution from the MaxCompute side.
SELECT * FROM max_oss_test;
-- Result
a    b   c
101  1   20241108
102  2   20241109
103  3   20241110
-- Execute the CREATE TABLE ... LIKE statement to create an internal table.
CREATE TABLE from_exetbl_like LIKE max_oss_test;
-- Query data of the internal table.
SELECT * FROM from_exetbl_like;
-- Only the schema of the internal table is returned.
a    b   c
Run the DESC from_exetbl_like; command to view the schema of the internal table.
+------------------------------------------------------------------------------------+
| Owner: ALIYUN$************                                                          |
| Project: ***_*****_***                                                              |
| TableComment:                                                                       |
+------------------------------------------------------------------------------------+
| CreateTime:               2023-01-10 15:09:47                                       |
| LastDDLTime:              2023-01-10 15:09:47                                       |
| LastModifiedTime:         2023-01-10 15:09:47                                       |
+------------------------------------------------------------------------------------+
| InternalTable: YES      | Size: 0                                                   |
+------------------------------------------------------------------------------------+
| Native Columns:                                                                     |
+------------------------------------------------------------------------------------+
| Field           | Type       | Label | Comment                                      |
+------------------------------------------------------------------------------------+
| a               | string     |       |                                              |
| b               | string     |       |                                              |
+------------------------------------------------------------------------------------+
| Partition Columns:                                                                  |
+------------------------------------------------------------------------------------+
| c               | string     |       |                                              |
+------------------------------------------------------------------------------------+
Create a Delta table.
CREATE TABLE mf_tt (pk bigint NOT NULL PRIMARY KEY, val bigint) tblproperties ("transactional"="true");
Create a Delta table with a composite primary key and configure its table properties.
CREATE TABLE mf_tt2 ( pk bigint NOT NULL, pk2 bigint NOT NULL, val bigint, val2 bigint, PRIMARY KEY (pk, pk2) ) tblproperties ( "transactional"="true", "write.bucket.num" = "64", "acid.data.retain.hours"="120" ) lifecycle 7;
Drop Table
This section covers how to drop both partitioned and non-partitioned tables.
Precautions
Exercise caution when dropping a table. Ensure the table is eligible for deletion. If a table is dropped by mistake, it may be recoverable if the backup and restoration feature is active for the project and the table is dropped within the specified backup data retention period. For more information about backup and restoration, see Backup and Restoration.
Dropping a table will reduce the amount of data stored in a MaxCompute project.
Command Format
DROP TABLE [IF EXISTS] <table_name>;
Parameter Description
Parameter | Required | Description |
IF EXISTS | No | If you do not specify the IF EXISTS option and the table does not exist, an error is returned. If you specify the IF EXISTS option, a success message is returned regardless of whether the table exists. |
table_name | Yes | The name of the table that you want to drop. |
Examples
-- Drop the sale_detail table. A success message is returned regardless of whether the sale_detail table exists.
DROP TABLE IF EXISTS sale_detail;
References
For more information on replicating data between tables, see CLONE TABLE.
For more information on modifying and viewing table details, see Modify and View Tables.
For more information on managing partitions in an existing table, see Partition Operations.
For more information on managing columns in an existing table, see Column Operations.