You can build a JAR package that contains your business logic and upload the package to develop Spark JAR jobs in a convenient manner. This topic provides an example of how to develop and deploy a JAR batch job.
Prerequisites
A workspace is created. For more information, see Manage workspaces.
A business application is developed and a JAR package is created for the application.
Procedure
Step 1: Develop a JAR package
E-MapReduce (EMR) Serverless Spark does not provide an integrated environment for developing JAR packages. Therefore, you must write the code of your Spark application and package the code into a JAR file in an on-premises or standalone development environment. This topic provides a test JAR package to help you quickly get started with JAR batch jobs. You can download the test package for use in subsequent steps.
Click spark-examples_2.12-3.3.1.jar to download the test JAR package.
The JAR package is a simple example provided by Spark. It is used to calculate the value of pi (π).
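If you plan to package your own application instead of using the test JAR, the following Scala sketch shows what a minimal Pi-estimation job looks like. It estimates π by sampling random points in a square and counting how many fall inside the inscribed circle. The package name, object name, and sample count in the sketch are assumptions for illustration; the class provided by the test JAR is org.apache.spark.examples.SparkPi.
```scala
// A minimal sketch of a Pi-estimation Spark job, similar in spirit to the bundled
// SparkPi example. The package and object names are assumptions for illustration.
package com.example.sketch

import scala.util.Random
import org.apache.spark.sql.SparkSession

object PiEstimate {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("PiEstimate").getOrCreate()
    // The number of partitions (slices) can be passed as the first job argument.
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = math.min(100000L * slices, Int.MaxValue).toInt
    // Sample random points in the 2 x 2 square and count those inside the unit circle.
    val count = spark.sparkContext.parallelize(1 until n, slices).map { _ =>
      val x = Random.nextDouble() * 2 - 1
      val y = Random.nextDouble() * 2 - 1
      if (x * x + y * y <= 1) 1 else 0
    }.reduce(_ + _)
    // The fraction of points inside the circle approximates pi / 4.
    println(s"Pi is roughly ${4.0 * count / (n - 1)}")
    spark.stop()
  }
}
```
After you compile code like this and package it into a JAR file, for example with sbt or Maven with the Spark dependencies marked as provided, you can upload the JAR package in the next step.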
Step 2: Upload the JAR package
Go to the Files page.
Log on to the EMR console.
In the left-side navigation pane, choose EMR Serverless > Spark.
On the Spark page, find the desired workspace and click the name of the workspace.
In the left-side navigation pane of the EMR Serverless Spark page, click Files.
On the Files page, click Upload File.
In the Upload File dialog box, click the area enclosed by the dotted line to select a JAR package, or directly drag a JAR package to the area.
In this example, the spark-examples_2.12-3.3.1.jar package is uploaded.
Step 3: Develop and run a job
In the left-side navigation pane of the EMR Serverless Spark page, click Data Development.
On the Development tab, click Create.
In the Create dialog box, configure the Name parameter, choose the JAR batch job type from the Type drop-down list, and then click OK.
In the upper-right corner of the configuration tab of the job, select a queue for the job.
For information about how to add a queue, see Manage resource queues.
Configure the following parameters and click Run. You do not need to configure other parameters.

Main JAR Resource: Select the JAR package that you uploaded in the previous step. In this example, the spark-examples_2.12-3.3.1.jar package is selected.

Main Class: Enter the main class that is specified when the Spark job is submitted. In this example, enter org.apache.spark.examples.SparkPi. For a sketch of how this value maps to your own code, see the example after this step.

After you run the job, click Details in the Actions column of the job on the Execution Records tab.
On the Development Job Runs tab of the Job History page, view related logs.
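The Main JAR Resource and Main Class parameters map directly to your own code: the Main Class value must be the fully qualified name of the object whose main method serves as the entry point of the application packaged in the uploaded JAR. The following hedged sketch uses hypothetical names; if it were packaged into a JAR file and uploaded as the Main JAR Resource, the Main Class value would be com.example.jobs.WordCount.
```scala
// Hypothetical business-logic job used only to illustrate the Main Class parameter.
// The package name, object name, and input path argument are assumptions.
package com.example.jobs

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.desc

object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("WordCount").getOrCreate()
    // The input path is expected to be passed as the first job argument.
    val words = spark.read.textFile(args(0))
      .selectExpr("explode(split(value, ' ')) AS word")
    // Count the occurrences of each word and print the 20 most common ones.
    words.groupBy("word").count().orderBy(desc("count")).show(20)
    spark.stop()
  }
}
```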
Step 4: Publish the job
A published job can be run on a workflow node.
Confirm that the job runs as expected. Then, click Publish in the upper-right corner of the configuration tab of the job.
In the Publish dialog box, configure the Remarks parameter and click OK.
(Optional) Step 5: View job information on the Spark UI
After the job runs as expected, you can view the running details of the job on the Spark UI.
In the left-side navigation pane of the EMR Serverless Spark page, click Job History.
On the Job History page, click the Development Job Runs tab.
On the Development Job Runs tab, find the desired job and click Details in the Actions column.
On the Overview tab, click Spark UI in the Spark UI field.
On the Spark Jobs page, view the running details of the job.
References
After a job is published, you can schedule the job in workflows. For more information, see Manage workflows. For information about a complete job development and orchestration process, see Get started with the development of Spark SQL jobs.