This topic describes common errors that occur when you connect Hologres to MaxCompute and how to resolve them.
Basic concepts
Comparison between Hologres and MaxCompute.
Item: Scenarios
MaxCompute: ETL, primarily for the Data Warehouse Detail (DWD) and Data Warehouse Service (DWS) layers.
Hologres: Interactive queries and real-time data serving, primarily for the Application Data Service (ADS) layer.
Item: User interaction
MaxCompute: Asynchronous job execution.
Hologres: Synchronous queries.
Item: Cluster resources
MaxCompute: Shared large cluster, delivered as Software as a Service (SaaS).
Hologres: Dedicated cluster, delivered as Platform as a Service (PaaS).
Item: Execution engine
MaxCompute: Based on a job execution model. Jobs run in stages, each stage requests computing resources on demand, and intermediate results are persisted to files.
Hologres: Massively Parallel Processing (MPP) architecture with fine-grained memory management, a memory-resident execution engine, and a user-space scheduler for SQL operators. Computation results are not written to disk.
Item: Scheduling
MaxCompute: Process-level scheduling. Resources are dynamically requested and allocated at runtime for each job.
Hologres: Lightweight thread-level scheduling. Resources are reserved and pre-allocated at startup.
Item: Scalability
MaxCompute: Virtually unlimited.
Hologres: Scales well, but complex queries should avoid shuffling data across many nodes.
Item: Storage format
MaxCompute: Columnar.
Hologres: Row-based, columnar, and a hybrid row-columnar format.
Item: Storage cost
MaxCompute: Low, based on Pangu.
Hologres: Relatively high, based on Pangu with SSDs for caching and acceleration.
Item: Interface standard
MaxCompute: MaxCompute SQL, similar to Hive.
Hologres: PostgreSQL.
Use cases for Hologres foreign and internal tables
Foreign tables
Foreign tables do not store data; they access data directly from MaxCompute. Lacking indexes, they rely solely on CPU for computation. Therefore, foreign tables are best suited for querying small datasets with low QPS.
Internal tables
Internal tables store data within Hologres. For data updates, complex queries, or high-QPS scenarios, we recommend importing data into internal tables. This allows you to fully leverage the underlying performance advantages of Hologres.
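As a minimal sketch of the two approaches (odps_server is Hologres's built-in MaxCompute server; the project, table, and column names are hypothetical):

```sql
-- Foreign table: data stays in MaxCompute; suits small, low-QPS queries.
CREATE FOREIGN TABLE sales_ext (
    id bigint,
    amount numeric
) SERVER odps_server
  OPTIONS (project_name 'my_mc_project', table_name 'sales');

-- Internal table: import the data for updates, complex queries, or high QPS.
CREATE TABLE sales_local (id bigint, amount numeric);
INSERT INTO sales_local SELECT id, amount FROM sales_ext;
```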
Performance tuning
Why did query performance drop after foreign table schema evolution?
Cause: By default, Hologres uses the HQE to accelerate queries on foreign tables. However, when a foreign table's schema evolves (e.g., due to MaxCompute schema changes), Hologres temporarily falls back to the SQE. This alternative access path is less efficient and degrades query performance.
Solution: First, use hologres.hg_query_log to identify slow queries. Second, check the engine_type field in the query logs. If it shows SQE, the performance degradation is confirmed to be caused by the fallback. Third, recreate the affected foreign table in Hologres with the updated schema.
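For example, a check of the query log might look like the following sketch (the duration and engine_type columns are assumed to match your instance's hg_query_log layout; verify before use):

```sql
-- Find recent slow queries and the engine that executed them.
SELECT query_start, duration, engine_type, query
FROM hologres.hg_query_log
WHERE query_start > now() - interval '1 day'
ORDER BY duration DESC
LIMIT 20;
```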
Common errors
MaxCompute permission-related errors:
How to fix the error: You have NO privilege 'MaxCompute:Select' on xxx?
How to fix the error: "The sensitive label of column 'xxx' is 2, but your effective label is 0"?
How to fix the error: "query next from foreign table executor failed validate userinfo"?
Other errors:
How to resolve the error "SERVER_INTERNAL_ERROR message: GetRecordBatch() is not implemented"?
How to fix the error "Query next from foreign table executor failed, not implemented"?
How to fix the error: "Open ORC file failed for schema mismatch. Reader schema"?
How to fix the error: "Request denied, may caused by server busy"?
How to resolve the error "SERVER_INTERNAL_ERROR message: GetRecordBatch() is not implemented"?
You receive the following error when you query a foreign table: query next from foreign table executor failed, GetRecordBatch() is not implemented.
Cause: The MaxCompute table is a CFile-type table. Hologres does not currently support this table type.
Solution: Add the following GUC parameter before your SQL statement.
set hg_experimental_enable_access_odps_with_table_api = on;
How to resolve the error "not an ORC file"?
You receive the following error when you query a foreign table: ERROR: status { code: SERVER_INTERNAL_ERROR message: "hos_exception: Invalid argument: ../../../src/OrcFileMetaHelper.cc(234)-<operator()>: not an ORC file.
Cause: Hologres foreign tables cannot directly access MaxCompute tables that are being stream loaded.
Solution: Add the following GUC parameters before your SQL statement.
set hg_experimental_enable_access_odps_with_table_api = on;
set hg_experimental_enable_access_odps_orc_via_holo = off;
How to fix the error "Specified partitions count in MaxCompute table: exceeds the limitation of 512, please add stricter partition filter or set axf_MaxCompute_partition_limit" or "Build desc failed: Exceeds the partition limitation of 512, current match xxx partitions"?
Cause:
By default, Hologres can query a maximum of 512 partitions from a foreign table per query. The current query exceeds this limit.
Solution:
Add a partition filter to your query to ensure it accesses no more than 512 partitions.
Import the data from MaxCompute into an internal table, which has no partition limit. For more information, see Import data from MaxCompute using SQL.
Use the following GUC parameter to adjust the number of partitions a query can access. The default is 512, and the maximum is 1024. We do not recommend setting this value too high, because scanning many partitions can degrade query performance.

-- For V1.1 and later
set hg_foreign_table_max_partition_limit = 128;
-- For V0.10
set hg_experimental_foreign_table_max_partition_limit = xxx;

Note: For MaxCompute tables with multi-level partitioning, the partition count is determined by the most granular partition level.
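For example, assuming a foreign table partitioned by a date column named ds (names hypothetical), a filter that bounds the number of partitions scanned might look like:

```sql
-- Scan only 7 daily partitions instead of the whole table.
SELECT count(*)
FROM my_foreign_table
WHERE ds BETWEEN '20240101' AND '20240107';
```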
How to fix the error "Build desc failed: Exceeds the scan limitation of 200 GB, current scan xxx GB"?
Cause:
By default, Hologres has a maximum underlying data scan limit of 200 GB per query. This limit applies to the data scanned from the selected MaxCompute partitions, not the total data stored in MaxCompute. The query failed because it exceeded this limit.
Solution:
Add filters to hit fewer partitions, keeping the scanned data size for a single query under 200 GB.
Import the data into Hologres before running the query. For more information, see Import data from MaxCompute using SQL.
(Not recommended) Increase the data volume limit with the following command:

set hg_experimental_foreign_table_max_scan_size = xxx;

Replace xxx with the required data size in GB, for example, 400. However, increasing the foreign table scan limit excessively can degrade performance and cause an OOM error, disrupting normal instance operations.
How to fix the error "Query next from foreign table executor failed, not implemented"?
Cause: The MaxCompute table data was loaded through the Streaming Tunnel (tunnel.createStreamUploadSession). To read this data, Hologres requires two GUC parameters to be enabled.
Solution:
Add the following two configuration items.
Note: These GUC parameters are supported in Hologres V1.3 and later versions.

set hg_experimental_enable_access_odps_with_table_api = on;
set hg_experimental_enable_access_odps_orc_via_holo = off;

If your instance version is earlier than V1.3, we recommend stopping the streaming load in MaxCompute and then merging the data with the following statements:

set odps.merge.task.mode=sql;
set odps.merge.lock.expire.time=0;
ALTER TABLE <tablename> [PARTITION] MERGE SMALLFILES;
How to resolve the error: Build desc failed: failed to check permission: Currently not supported table type "view"?
Cause: MaxCompute views are not currently supported.
Solution: Query the view's underlying tables directly, or materialize the view into a MaxCompute table and create a foreign table on that table.
How to fix the error "Build desc failed: failed to get foreign table split:MaxCompute-0010000: System internal error - get input pangu dir meta fail"?
Cause:
Hologres's configuration for reading from MaxCompute was not updated promptly.
Solution:
Wait a few minutes and retry. If the issue persists, contact technical support.
How to fix "ERROR: Query:[xxx] Build desc failed: failed to get foreign table split:ERPC_ERROR_CONNECTION_CLOSED"?
Cause:
The MaxCompute table has too many small files. This causes the metadata request to exceed the 1 GB maximum limit of the Remote Procedure Call (RPC) protocol.
Solution:
Run the following statements in MaxCompute to merge the small files:

set odps.merge.task.mode=sql;
set odps.merge.lock.expire.time=0;
ALTER TABLE <tablename> [PARTITION] MERGE SMALLFILES;

Upgrade your instance to V0.10.21 or later, where this error has been fixed. For more information, see Instance upgrade.
Contact MaxCompute technical support to resolve the issue at the source. If the data volume is not large, you can instead import the data into Hologres.
How to fix: ERROR: status { code: SERVER_INTERNAL_ERROR message: "hos_exception: IO error: Failed to execute pangu open normal file, err: PanguParameterInvalidException" }?
Cause:
The HQE engine cannot directly access encrypted MaxCompute data stored on Pangu.
Solution:
Run the following command to switch the execution engine for foreign tables to SQE, which can access encrypted MaxCompute data:

ALTER DATABASE <dbname> SET hg_experimental_enable_access_odps_orc_via_holo = false;

This configuration applies at the database level and takes effect on new connections. You can also set it at the session level:

SET hg_experimental_enable_access_odps_orc_via_holo = false;
How to fix the error: "failed to import foreign schema:Failed to get MaxCompute table:Not enable schema evolution"?
Cause:
The metadata of the MaxCompute table has been modified.
Solution:
Upgrade your Hologres instance to V1.3 or later.
After you update the schema of a MaxCompute table (for example, by adding or deleting columns), run IMPORT FOREIGN SCHEMA to refresh the foreign table schema.
If the error persists after running IMPORT FOREIGN SCHEMA, recreate the MaxCompute table and then the foreign table. After a schema modification, MaxCompute enters a schema evolution status, and Hologres cannot read tables in this status.
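A sketch of the refresh, with the project, table, and schema names as placeholders:

```sql
-- Re-import the table definition from MaxCompute, replacing the stale foreign table.
IMPORT FOREIGN SCHEMA my_mc_project LIMIT TO (my_table)
FROM SERVER odps_server INTO public
OPTIONS (if_table_exist 'update');
```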
How to fix the error: "Open ORC file failed for schema mismatch. Reader schema"?
Cause:
The MaxCompute table is in ORC format, and the storage method for its DECIMAL type has changed. This typically happens when a new DECIMAL field is added to the table or a canary configuration change is made in MaxCompute. The change causes an error when Hologres reads the DECIMAL type.
Solution:
Run the following command in MaxCompute and re-import the data:

set odps.storage.orc.enable.binary.decimal=false;

Alternatively, change the DECIMAL type in the MaxCompute table to DOUBLE and refresh the data again.
How to fix the error: "failed to import foreign schema:Failed to get MaxCompute table:Not enable acid table"?
Cause:
The MaxCompute table is a transactional table (ACID).
Solution:
Transactional tables are not supported. Convert them to standard tables.
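One way to convert, sketched in MaxCompute SQL (table names are placeholders; verify that the new table's lifecycle and partitioning match your needs):

```sql
-- Copy the transactional table's data into a standard (non-ACID) table.
CREATE TABLE sales_std AS SELECT * FROM sales_acid;
```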
How to fix the error: "Request denied, may caused by server busy"?
Cause:
Resources for querying foreign tables are exhausted because CPU usage has reached its limit.
Solutions:
Optimize your SQL to use resources more efficiently. For more information, see Optimize the performance of querying MaxCompute tables in Hologres.
Reduce the Degree of Parallelism (DOP).
Run the following command to view the current configuration:

show hg_foreign_table_executor_max_dop;

Use the following command to reduce the DOP. We recommend setting it to half of the current value.

-- Syntax example
set hg_foreign_table_executor_max_dop = <concurrency>;
-- Usage example
set hg_foreign_table_executor_max_dop = 18;

The DOP controls the concurrency for reading data from a foreign table at a single execution node. The default is 256, and the valid range is 0 to 1024. Risk: setting the DOP too high can cause an OOM error, leading to import or query failures, or even instance restarts and service unavailability. Setting it too low degrades the performance of foreign table queries and imports.
Import the data into an internal table. You can create an index on an internal table for better query performance. For more information, see Import data from MaxCompute using SQL.
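For example, an import into an internal table tuned for the query pattern might look like this sketch (the table, columns, and key choices are hypothetical):

```sql
BEGIN;
CREATE TABLE sales_local (ds text, id bigint, amount numeric);
-- Distribution key speeds up joins and aggregations on id;
-- clustering key speeds up range filters on ds.
CALL set_table_property('sales_local', 'distribution_key', 'id');
CALL set_table_property('sales_local', 'clustering_key', 'ds');
COMMIT;
INSERT INTO sales_local SELECT ds, id, amount FROM sales_ext;
```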
How to fix the error during data import: "Query executor exceeded total memory limitation xxxxx: yyyy bytes used"?
Cause:
The query exceeded the memory limit because of an excessively large data volume or complex import logic. An instance consists of multiple nodes, and each node has a standard memory limit of 64 GB. This memory is divided into three parts: one third for computation, one third for caching, and one third for metadata. This error indicates that the computation memory limit was exceeded.
Solution:
Check the execution plan
Run the explain analyze <sql>; command to view the actual number of rows at each stage of the execution plan. If an import query involves subqueries but some tables have not been analyzed, or if analyze was run and the data has since been updated, the statistics may be inaccurate. This can cause the query optimizer to choose a suboptimal join order, leading to high memory consumption.
Run the analyze <tablename>; command to update the statistics for all involved internal and foreign tables. This helps the query optimizer generate a more efficient execution plan.
Set the number of rows per import
When a table has many columns or large rows, the amount of data read in a single batch increases. Add the following parameter before your SQL statement to control the number of rows read per batch, which can help prevent OOM errors.

set hg_experimental_query_batch_size = 1024; -- The default value is 8192.
insert into holo_table select * from mc_table;

Reduce the import DOP
Reducing the import DOP also effectively decreases memory usage during the import. The DOP is controlled by the hg_foreign_table_executor_max_dop parameter, which defaults to the number of CUs in the instance. Set a smaller value during import to lower memory usage.

set hg_foreign_table_executor_max_dop = 8;
insert into holo_table select * from mc_table;

Check for excessive duplicate data in the foreign table
If the import still fails after the steps above and you are using an insert on conflict statement, check whether the foreign table contains too much duplicate data. Excessive duplicates also degrade import performance. Deduplicate the data in MaxCompute before importing. For more information, see Merge multiple rows of data into one row.
Upgrade to a version with dynamic memory adjustment
Starting from V1.1.24, Hologres adjusts memory dynamically: the backend refreshes the current memory level in real time and allocates more memory for computation when it is available. Upgrade Hologres to the latest version. For more information, see Instance upgrade.
Increase instance resources
If all other solutions fail, scale up your Hologres instance. For more information, see Upgrade.
How to fix the error: "Timestamp overflow detected while converting timestamp from orc VectorBatch to arrow"?
Cause:
The MaxCompute table contains a TIMESTAMP type. After the data is loaded through Tunnel, the precision of the TIMESTAMP becomes nanoseconds. Hologres does not currently support nanosecond precision for timestamps.
Solution:
In MaxCompute, convert the TIMESTAMP type to DATETIME.
Upgrade your Hologres instance to V1.1.70 or later.
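A sketch of the conversion in MaxCompute (table and column names are placeholders):

```sql
-- Rebuild the table with DATETIME instead of TIMESTAMP.
CREATE TABLE events_dt AS
SELECT id, CAST(ts AS DATETIME) AS ts
FROM events;
```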
How to fix the error: You have NO privilege 'MaxCompute:Select' on xxx?
Cause:
The current account does not have the SELECT privilege on the MaxCompute table.
Solution:
Contact a MaxCompute administrator to grant the SELECT privilege on the table to the current account. For more information, see MaxCompute permissions.
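A sketch of the grant, run in the MaxCompute project by an administrator (the account and table names are placeholders):

```sql
-- Grant read access on one table to an Alibaba Cloud account.
GRANT Select ON TABLE sales TO USER ALIYUN$user@example.com;
```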
How to fix the error: "The sensitive label of column 'xxx' is 2, but your effective label is 0"?
Cause:
The current account has permissions on only some of the fields in the MaxCompute table.
Solution:
Ensure your account has permissions for all columns accessed in your query. If permissions are missing, request them or modify your query to include only fields you are authorized to access.
If you have the necessary permissions on the accessed columns, this error may be caused by a defect in an older instance version. Add the following commands before your query to resolve the issue.

set hg_experimental_enable_odps_executor=on;
set hg_experimental_enable_query_master=on;
How to fix the error: "query next from foreign table executor failed validate userinfo"?
Cause:
Hologres was not granted the AliyunHologresEncryptionDefaultRole role. This error can also occur intermittently because of caching, even if the authorization was granted less than three hours ago.
Solution:
Attach the AliyunHologresEncryptionDefaultRolePolicy policy to the account. For more information, see Query encrypted MaxCompute data.
How to improve query performance on a foreign table?
The most effective way to resolve slow queries is to optimize your SQL statements. For more information, see Optimize the performance of querying MaxCompute tables in Hologres.
Why am I getting a "You have NO privilege 'odps:Select' on xxx" error when querying a foreign table?
Cause
The user performing the query does not have the SELECT privilege on the MaxCompute table.
Solution
Ask a MaxCompute administrator to grant the SELECT privilege on the specified table to the account you are using. For more information, see MaxCompute permissions.
Why do I get "The sensitive label of column 'xxx' is 2, but your effective label is 0" when querying a foreign table?
Cause
This error occurs because the user account has permission to access only a subset of columns in the MaxCompute table, and older versions of Hologres (before V0.8) did not fully support this scenario.
Solutions
Here are the recommended solutions:
(Recommended) Upgrade your Hologres instance to V0.8 or later. These versions properly handle column-level permissions.
As a temporary workaround on older instances, set the following GUC parameters before your query:

set hg_experimental_enable_odps_executor=on;
set hg_experimental_enable_query_master=on;

Alternatively, request the SELECT privilege on all columns of the MaxCompute table. For more information, see MaxCompute permissions.
How do I fix the "You have NO privilege 'odps:Select' on xxx" error when querying a MaxCompute table from a different project, even with permissions?
Cause
This error can happen during cross-project access if the MaxCompute project uses package-based access control. Hologres needs to be explicitly told which project context to use for the authorization.
Solution
Set the current project name before your query. The command depends on your Hologres version:

-- For Hologres V0.7 and later
set seahawks.seahawks_internal_current_odps_project='holoprojectname';
-- For Hologres V0.8 and later
set hg_experimental_odps_current_project_name = 'holoprojectname';
Why do I get "You have NO privilege 'odps:List' on xxx" when creating a foreign table using HoloWeb or DataStudio?
Cause
Creating a foreign table through HoloWeb or DataStudio requires the List permission in MaxCompute to display the available tables. Your account lacks this permission.
Solutions
Ask a MaxCompute administrator to grant the List permission to your account. For more information, see MaxCompute permissions.
Create the foreign table directly with a CREATE FOREIGN TABLE SQL statement, which does not require the List permission. For more information, see Accelerate queries of MaxCompute data based on foreign tables.
How do I resolve an "Access denied by project ip white list" error when creating a foreign table on HoloWeb?
Cause
This error means the target MaxCompute project has an IP allowlist enabled, and the IP address of the HoloWeb server (the sourceIP in the error message) is not on that list.
Solution
Add the sourceIP from the error message to the IP allowlist of the target MaxCompute project.
What does the error "You don't exist in project xxx" mean when creating a foreign table for MaxCompute?
Cause
This error indicates that the account used for creating the foreign table has not been added as a member to the specified MaxCompute project.
Solution
Verify that the project name (the xxx in the error) is correct. If the name is correct, ask a MaxCompute administrator to add your account to the project. For more information, see Permission overview.
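A sketch of the fix, run in the MaxCompute project by an administrator (the account and table names are placeholders):

```sql
-- Add the account as a project member, then grant the privileges it needs.
ADD USER ALIYUN$user@example.com;
GRANT Select ON TABLE sales TO USER ALIYUN$user@example.com;
```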