Alibaba Cloud E-MapReduce (EMR) Serverless Spark provides default global Spark configurations for running and managing various types of jobs. The default configurations contain all the settings required to run a job, which ensures that jobs are submitted and run with a consistent configuration and runtime environment.
Prerequisites
A workspace is created. For more information, see Manage workspaces.
Configure parameters
In the left-side navigation pane of the EMR Serverless Spark page, click Configurations to view or modify the related parameters.
| Parameter | Description |
| --- | --- |
| Engine Version | The version of the engine used by the compute. For more information about engine versions, see Engine versions. |
| spark.driver.cores | The number of CPU cores used by the driver of the Spark application. |
| spark.driver.memory | The amount of memory available to the driver of the Spark application. |
| spark.executor.cores | The number of CPU cores that can be used by each executor. |
| spark.executor.memory | The amount of memory available to each executor. |
| spark.executor.instances | The number of executors allocated to the Spark application. |
| Dynamic Resource Allocation | This feature is disabled by default. After you enable it, you must configure the additional parameters that appear. |
| More Memory Configurations | Additional memory-related configurations. |
| Spark Configuration | The Spark configurations, specified as `key value` pairs. Separate the configurations with spaces. |
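As an illustration of the space-separated `key value` form that the Spark Configuration parameter accepts, an entry might look like the following. The property names are standard Spark properties and the values are placeholders for illustration, not recommended settings; the properties you can set depend on your engine version.

```
spark.sql.shuffle.partitions 200 spark.serializer org.apache.spark.serializer.KryoSerializer
```

Each configuration is a Spark property name followed by its value, with a space between the key and the value and between consecutive configurations.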