This topic provides answers to some frequently asked questions (FAQ) about data lake management in Data Lake Analytics (DLA).
FAQ related to lakehouses
FAQ related to metadata discovery
FAQ related to data warehousing based on multi-database merging and one-click data warehousing
FAQ related to lakehouses
What is a lakehouse?
A lakehouse is a data warehouse that is built on top of a data lake. It is a big data paradigm that is designed to resolve the issues that you may encounter when you use data lakes directly. For example, data lakes do not support upsert operations, multi-version concurrency control (MVCC), or extract, transform, load (ETL) operations on incremental data. Data lakes also tend to accumulate large numbers of small files, and the raw files are not organized for efficient analytics. In addition, the metadata in DLA can become inconsistent with the metadata in the data sources, some schemas are not supported, and data lakes lack suitable data indexes.
Lakehouses add a management layer on top of data lake storage systems, such as Object Storage Service (OSS). Lakehouses allow you to write files of different formats to data lakes. The formats are categorized into object management formats, such as Apache Hudi and Delta Lake, and object formats, such as Parquet and ORC. When data is written to the data lake, operations such as upserts, small file merging, MVCC, and snapshot reads are supported. In addition, lakehouses maintain large amounts of table metadata in databases and expose a unified data access view to the upper-layer SQL and analytical engines. In this way, lakehouses provide data warehouse capabilities that data lakes alone cannot provide.
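The following is a minimal PySpark sketch of the upsert and snapshot-read (time travel) operations that an object management format such as Delta Lake provides on top of object storage. The OSS path, table columns, and session configuration are hypothetical, and the sketch assumes a Spark environment with the delta-spark package available; it illustrates the technique and is not DLA's internal implementation.

```python
# Minimal sketch: upsert and snapshot reads with Delta Lake on object storage.
# The OSS path and columns (id, val) are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("lakehouse-upsert-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

path = "oss://my-bucket/lakehouse/orders"  # hypothetical OSS path

# The initial load creates version 0 of the table.
spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"]) \
    .write.format("delta").mode("overwrite").save(path)

# Upsert: update matching rows and insert new ones (not possible with raw Parquet files).
updates = spark.createDataFrame([(2, "b2"), (3, "c")], ["id", "val"])
(DeltaTable.forPath(spark, path).alias("t")
    .merge(updates.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# Snapshot read: query the table as of an earlier version (MVCC-style time travel).
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
```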
Does data ingestion into a lakehouse cause heavy loads on ApsaraDB RDS data sources? How do I perform throttling for data warehousing?
If you use real-time data ingestion, the lakehouse obtains incremental data from binary log files by using Data Transmission Service (DTS) and writes the data to the data lake. Because the data is captured from binary logs instead of being queried from the database tables, this does not place heavy loads on ApsaraDB RDS data sources.
If you use data ingestion for data warehousing, data is read from the ApsaraDB RDS data source, which may overload the source. In this case, you can configure the related parameters to limit the load.
For more information, see Advanced options.
The lakehouse workload fails to run and no Spark logs can be queried. Why?
In the workload details dialog box, if the "specified driver resource spec:[xlarge]+executor resource spec:[xlarge] * executor instances:[2] exceed the max CPU:[16.0] or max Memory:[64.0], errorCode=InvalidParameter" error message or a similar error message is displayed for the job status, the number of Spark compute units (CUs) that you configured when you created the workload exceeds the maximum number of CUs that can be deployed in the virtual cluster. The example after the solutions below illustrates the calculation.
Solutions:
Increase the maximum number of Spark CUs that can be deployed in the virtual cluster.
Reduce the number of Spark CUs that are configured for the workload so that the total does not exceed the maximum number of CUs that can be deployed in the virtual cluster.
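The following minimal sketch recomputes the totals from the sample error message. The mapping of resource specs to cores and memory (for example, xlarge = 8 cores and 32 GB) is an assumption for illustration only; check the DLA documentation for the exact values used by your cluster.

```python
# Sketch: why driver + executor resources can exceed the virtual-cluster limits.
# The spec-to-resource mapping below is an assumption for illustration only.
SPECS = {"small": (1, 4), "medium": (2, 8), "large": (4, 16), "xlarge": (8, 32)}  # (cores, memory in GB)

driver_spec, executor_spec, executor_instances = "xlarge", "xlarge", 2
max_cpu, max_memory_gb = 16.0, 64.0  # limits reported in the error message

total_cores = SPECS[driver_spec][0] + SPECS[executor_spec][0] * executor_instances
total_memory = SPECS[driver_spec][1] + SPECS[executor_spec][1] * executor_instances

print(f"requested {total_cores} cores / {total_memory} GB, "
      f"limit {max_cpu} cores / {max_memory_gb} GB")
# Under these assumptions the request is 24 cores / 96 GB, which exceeds both limits,
# so either raise the cluster maximum or choose smaller specs or fewer executors.
```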
FAQ related to metadata discovery
After a metadata discovery task is configured and manually triggered, the incremental data that is synchronized after the task is run cannot be queried. Why?
Cause: When you manually trigger a metadata discovery task, only the data that exists when the task runs is discovered. Incremental data that is synchronized after the task is run cannot be queried.
Solution: Change the trigger mode of the metadata discovery task from manual to automatic and specify the trigger time. After the change, the incremental data that is synchronized after the task is run can be queried.
What are the differences between the data warehouse mode and free mode of OSS? In which scenarios can I apply the two modes?
| OSS configuration mode | Scenario | OSS path format | Discovery accuracy | Performance |
| --- | --- | --- | --- | --- |
| Data warehouse mode | Users directly upload data to OSS and plan to create a standard data warehouse that can be used to compute and analyze data. | Database name/Table name/File name, or Database name/Table name/Partition name/.../Partition name/File name | High | High |
| Free mode | OSS data already exists, but the path of the OSS data is not configured in a specified format. Users plan to create databases, tables, and partitions that can be analyzed. | No requirements | Medium | Medium |
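As an illustration of the data warehouse mode layout, the following sketch parses hypothetical object keys of the form Database name/Table name/Partition name/.../File name; all bucket, database, table, and partition names in the example are made up.

```python
# Sketch: parsing object keys that follow the data warehouse mode layout
# (database/table[/partition/...]/file). All names below are hypothetical.
def parse_warehouse_key(key: str):
    """Split an object key into database, table, partition directories, and file name."""
    parts = key.strip("/").split("/")
    if len(parts) < 3:
        raise ValueError(f"not a warehouse-mode key: {key}")
    database, table, *partitions, file_name = parts
    return {"database": database, "table": table,
            "partitions": partitions, "file": file_name}

print(parse_warehouse_key("sales_db/orders/part-00000.csv"))
print(parse_warehouse_key("sales_db/orders/dt=2023-01-01/region=cn/part-00000.csv"))
```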
Why is the CSV file exported from Excel not discovered?
Metadata discovery tasks cannot discover Excel formats such as XLS or XLSX. Make sure that the file exported from Excel is saved as a CSV file.
A metadata discovery task identifies a CSV file by sampling: it reads the first 1,000 rows of the file and checks whether the number of fields and the delimiter are consistent across these rows. If they are consistent, the task treats the file as a CSV file and infers its schema from the sample. A rough analogue of this check is shown in the sketch below.
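The exact rules that DLA applies are internal; the following minimal Python sketch only illustrates the idea of sniffing a delimiter and verifying a consistent field count across the first 1,000 rows of a file.

```python
import csv
from itertools import islice

def looks_like_csv(path: str, sample_rows: int = 1000) -> bool:
    """Illustrative check: sniff a delimiter, then verify that the first
    sample_rows rows all have the same number of fields."""
    with open(path, newline="") as f:
        sample = f.read(64 * 1024)
        try:
            dialect = csv.Sniffer().sniff(sample)
        except csv.Error:
            return False
        f.seek(0)
        reader = csv.reader(f, dialect)
        field_counts = {len(row) for row in islice(reader, sample_rows) if row}
    return len(field_counts) == 1
```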
The JSON files that are stored in a directory have the same schema but no tables are created. Why?
Metadata discovery tasks can discover only directories that contain nothing but files. If a directory contains both files and subdirectories, the directory cannot be discovered.
When a metadata discovery task is triggered to discover all data that is shipped from Simple Log Service to OSS, the discovery result shows that some Logstores of Simple Log Service fail to generate tables. Why?
Check whether the Logstores that fail to generate tables are configured to ship data to OSS.
The "Make sure the bucket you specified be in the same region with DLA." error message appears when a Simple Log Service metadata discovery task is running. Why?
Cause: Simple Log Service ships data to an OSS bucket that is not deployed in the same region as DLA. As a result, DLA cannot access the OSS bucket. Therefore, when a Simple Log Service metadata discovery task is triggered, the following error message may appear: "OSS object [oss://test-bucket/] is invalid. Make sure the bucket you specified be in the same region with DLA. Detailed message: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint."
Solution: Use an OSS metadata discovery task in the region where the destination OSS bucket resides instead of a Simple Log Service metadata discovery task. This way, DLA automatically creates the tables, and you can query the data that is shipped from Simple Log Service.
The OSS metadata discovery task does not discover tables from the CSV files that are stored in OSS. Why?
Causes:
The size of a field value exceeds 4,096 bytes, which interrupts file sampling.
An OSS metadata discovery task infers different schemas based on the sampled files in the same directory. As a result, the task cannot determine the schema that can be used.
The file format is not CSV.
After a CSV file is stored in OSS, the data type of a field discovered by the OSS metadata discovery task is not the same as the data type of the field in the CSV file. Why?
Causes:
This field has a null value. The task infers that the data type of the field is STRING based on the file sampling result.
A field in the data of the CSV file is not of the STRING type.
The format of the delimiter in the CSV file is incorrect. As a result, the OSS metadata discovery task incorrectly considers that the data types of all the fields in the CSV file are STRING.
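The following short sketch shows why a wrong delimiter collapses every row into a single field, which a type-inference step can then only classify as STRING. The sample data and delimiters are hypothetical.

```python
import csv
import io

data = "id;amount;created_at\n1;9.99;2023-01-01\n2;12.50;2023-01-02\n"

# Parsed with the expected delimiter, each row has three separate fields.
print(list(csv.reader(io.StringIO(data), delimiter=";")))
# [['id', 'amount', 'created_at'], ['1', '9.99', '2023-01-01'], ...]

# Parsed with the wrong delimiter, every row collapses into one field,
# so a type-inference step can only classify each column as STRING.
print(list(csv.reader(io.StringIO(data), delimiter=",")))
# [['id;amount;created_at'], ['1;9.99;2023-01-01'], ...]
```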
In a CSV file, the first row is a description row, the second row is a header row, and the third row is a data row. Can an OSS metadata discovery task discover this CSV file?
No, an OSS metadata discovery task cannot discover this CSV file.
OSS metadata discovery tasks can discover a CSV file only if the file meets one of the following conditions:
The file contains only data rows and does not contain header rows.
The first row is a header row.
CSV files that do not meet the preceding conditions cannot be discovered.
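For illustration, the following sketch contrasts a file that starts with a free-form description row with a file that starts with a header row; the column names and values are hypothetical.

```python
import csv
import io

# Not discoverable: the first row is a free-form description, so a parser
# mistakes it for the header and the real header becomes a data row.
with_description = "Monthly order export\nid,amount\n1,9.99\n"
print(csv.DictReader(io.StringIO(with_description)).fieldnames)
# ['Monthly order export']

# Discoverable: the first row is a header row followed by data rows.
header_first = "id,amount\n1,9.99\n2,12.50\n"
for row in csv.DictReader(io.StringIO(header_first)):
    print(row)  # {'id': '1', 'amount': '9.99'}, then {'id': '2', 'amount': '12.50'}
```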
FAQ related to data warehousing based on multi-database merging and one-click data warehousing
What do I do if the "Access denied for user" error message appears when I connect DLA to a database during one-click data warehousing?
Cause: The username or password that is used to log on to the database is incorrect.
Solution: Check whether the username or password that is used to log on to the database has been changed. If it has been changed, log on to the DLA console. In the left-side navigation pane, click Metadata management. On the Metadata management page, find your one-click data warehousing task and click Library table details in the Actions column. On the page that appears, click Configuration and then click Update to modify the configurations of the task.
What do I do if the "because Could not create connection to database server. Attempted reconnect 3 times. Giving up" error message appears when I connect DLA to a database during one-click data warehousing?
Cause: The source ApsaraDB RDS database is overloaded.
Solutions:
Configure the advanced options of the task to reduce the read parallelism and lower the load on the source ApsaraDB RDS database. For more information, see Advanced options.
Retry the task during off-peak hours of the source database.
What do I do if the "because Encountered too many errors talking to a worker node. The node may have crashed or be under too much load. This is probably a transient issue, so please retry your query in a few minutes." error message appears when I connect DLA to a database during one-click data warehousing?
Cause: The overall load of DLA Public Edition is excessively high at that time.
Solution: Click Execute again to retry the one-click data warehousing task. If this issue persists, contact DLA technical support personnel.
What do I do if the "No such instance with ID: rm-xxxxxx" error message appears when I create a data warehouse by merging databases or run a one-click data warehousing task?
This error message indicates that the ApsaraDB RDS instance rm-xxxxxx that is involved in the data warehousing task has been released.
What do I do if the "because Communications link failure" error message appears when I run a one-click data warehousing task?
Detailed error message: "(The last packet successfully received from the server was 900,120 milliseconds ago. The last packet sent successfully to the server was 900,120 milliseconds ago.), 23 out of 110 jobs are finished."
Causes:
The database has views.
The source ApsaraDB RDS database is overloaded. As a result, the ApsaraDB RDS database fails to restart or does not respond.
Solution: Delete the views from the database or reduce the parallelism.
Which regions support Data Lake Formation?
All regions in mainland China support Data Lake Formation. The following table describes the regions outside mainland China that support Data Lake Formation.
| Region | Data Lake Formation supported |
| --- | --- |
| China (Hong Kong) | Yes |
| Japan (Tokyo) | Yes |
| Singapore | Yes |
| US (Silicon Valley) | Yes |
| US (Virginia) | Yes |
| UK (London) | Yes |
| Germany (Frankfurt) | Yes |
| Malaysia (Kuala Lumpur) | Yes |
The one-click data warehousing task is successful, but some tables are not synchronized. Why?
Some tables may be skipped while the task runs. You can check the number of skipped tables in the task status and view the specific tables in the extraWarnTips field of the task details. For example, tables whose table names or field names contain Chinese characters are skipped during a one-click data warehousing task.
If the names and field names of the skipped tables do not contain Chinese characters, check whether the Java Database Connectivity (JDBC) account has permissions to access these tables.
What do I do if the one-click data warehousing task fails to run and the "mppErrorCode=30101, because Direct buffer memory..." error message appears?
Cause: The public cluster of the serverless Presto engine is overloaded.
Solution: Manually retry the one-click data warehousing task. If this issue persists, contact us for technical support.
What do I do if the one-click data warehousing task fails to run and the "because Failed to rename <OSS Path>" error message appears?
Cause: The storage class of the OSS bucket that you configured when you created the data warehouse is Archive. As a result, DLA cannot rename the objects in the specified OSS path.
Solution: Create another data warehousing task and configure an OSS bucket of a non-Archive storage class.
What do I do if the one-click data warehousing task fails to run for ApsaraDB for MongoDB and the "Cannot allocate slice larger than XXXXX bytes" error message appears?
Detailed error message:
because Error reading from mongodb: Cannot allocate slice larger than 2147483639 bytes.
Cause: The size of some fields in the data that the serverless Presto engine of DLA reads from ApsaraDB for MongoDB is excessively large.
Solution: Modify the advanced configurations of the one-click data warehousing task and use sensitive-columns to filter out the fields with large sizes. For more information, see Advanced options.
Why does the one-click data warehousing task fail after it has run for more than ten hours?
Causes:
Thousands of tables are stored in the database, which may cause a synchronization timeout.
The tables that encounter the synchronization timeout issue do not contain integer index fields or UNIQUE KEY fields. As a result, the split calculation task that runs on the serverless Presto engine times out (see the sketch after the solution below).
Solution: Set the connections-per-job parameter to 1. For more information, see Advanced options.
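As a generic illustration of why integer index or UNIQUE KEY columns matter for splitting, the following PySpark sketch performs a partitioned JDBC read; the table can be divided into parallel splits only when a numeric partition column is available. The connection URL, credentials, table, and column names are hypothetical, and this is not DLA's internal implementation.

```python
# Sketch: partitioned JDBC read in Spark. Without a numeric partitionColumn,
# the whole table is read through a single connection, which is slow for very
# large tables. The URL, credentials, table, and column below are hypothetical,
# and the MySQL JDBC driver must be on the classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc-split-sketch").getOrCreate()

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://rm-xxxxxx.mysql.rds.aliyuncs.com:3306/sales_db")
    .option("dbtable", "orders")
    .option("user", "reader")
    .option("password", "****")
    .option("partitionColumn", "id")  # integer primary key or indexed column
    .option("lowerBound", "1")
    .option("upperBound", "10000000")
    .option("numPartitions", "8")     # lower this value to reduce load on the source
    .load()
)
print(df.rdd.getNumPartitions())
```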