AnalyticDB: Spark application performance diagnostics

Last Updated: Mar 01, 2024

AnalyticDB for MySQL Data Lakehouse Edition (V3.0) provides the Spark application performance diagnostics feature. If your Spark application has performance issues, you can use diagnostic information to quickly identify and analyze performance bottlenecks, optimize the Spark application, and improve troubleshooting efficiency. This topic describes how to perform Spark application performance diagnostics.

Prerequisites

  • An AnalyticDB for MySQL Data Lakehouse Edition (V3.0) cluster is created. For more information, see Create a cluster.

  • A job resource group that has at least 8 AnalyticDB compute units (ACUs) of reserved computing resources is created. For more information, see Create a resource group.

  • A Resource Access Management (RAM) user is granted the AliyunADBDeveloperAccess permission. For more information, see Manage RAM users and permissions.

  • A database account is created for the AnalyticDB for MySQL Data Lakehouse Edition (V3.0) cluster.

  • AnalyticDB for MySQL is authorized to assume the AliyunADBSparkProcessingDataRole role to access other cloud resources. For more information, see Perform authorization.

Scenarios

The Spark application performance diagnostics feature is suitable for the following scenarios:

  • Dataset performance analysis: You want to analyze processing performance when you use Spark to process large datasets. This feature helps you quickly identify performance bottlenecks, such as memory spikes and spills, and improve data processing efficiency.

  • Load balancing across large numbers of applications: Highly concurrent workloads may cause performance issues for Spark applications, such as data skew, long tails, and load imbalance. This feature helps you quickly identify these issues and optimize your Spark applications. One common mitigation for data skew is sketched after this list.
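
For example, if the diagnostics point to data skew in a join, salting the join key is one common mitigation. The following PySpark sketch is a minimal illustration under assumed data: the orders and users tables, their columns, and the application name are illustrative and are not part of AnalyticDB for MySQL.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Sketch: salting a skewed join key. All names (orders, users,
# user_id, amount, region) are illustrative assumptions.
spark = SparkSession.builder.appName("skew-salting-sketch").getOrCreate()

# A large table in which one key ("user_1") dominates, and a small
# dimension table joined against it.
orders = spark.createDataFrame(
    [("user_1", 10.0)] * 1000 + [("user_2", 5.0)] * 10,
    ["user_id", "amount"],
)
users = spark.createDataFrame(
    [("user_1", "CN"), ("user_2", "US")], ["user_id", "region"]
)

NUM_SALTS = 8

# Append a random salt to each row of the large side so that the hot
# key is spread across NUM_SALTS shuffle partitions instead of one.
salted_orders = orders.withColumn("salt", (F.rand() * NUM_SALTS).cast("int"))

# Replicate each row of the small side once per salt value so that
# every salted key on the large side still finds its match.
salts = spark.range(NUM_SALTS).select(F.col("id").cast("int").alias("salt"))
salted_users = users.crossJoin(salts)

result = (
    salted_orders.join(salted_users, ["user_id", "salt"])
    .groupBy("region")
    .agg(F.sum("amount").alias("total_amount"))
)
result.show()
spark.stop()
```

Because the small side of the join is replicated once per salt value, NUM_SALTS should stay small relative to the shuffle parallelism.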

Limits

  • Only Spark applications that were successfully run within the last 14 days can be diagnosed.

  • Only batch and streaming applications can be diagnosed. For reference, a minimal batch application is sketched after this list.
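
The following PySpark sketch shows the shape of a batch application of the kind that can be diagnosed. The OSS paths, the application name, and the "category" column are placeholders, not real resources.

```python
from pyspark.sql import SparkSession

# A minimal batch application. The OSS paths and the "category"
# column are placeholder assumptions.
spark = SparkSession.builder.appName("batch-example").getOrCreate()

df = spark.read.parquet("oss://examplebucket/input/")
df.groupBy("category").count().write.mode("overwrite").parquet(
    "oss://examplebucket/output/"
)

spark.stop()
```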

Procedure

  1. Log on to the AnalyticDB for MySQL console. In the upper-left corner of the console, select a region. In the left-side navigation pane, click Clusters. On the Data Lakehouse Edition (V3.0) tab, find the cluster that you want to manage and click the cluster ID.

  2. In the left-side navigation pane, choose Job Development > Spark JAR Development.

  3. In the Applications section, find the application that you want to diagnose and choose More > History in the Actions column.

  4. In the Execution History section, find the job that you want to diagnose and click Diagnose in the Actions column.

    Note

    After the diagnostics are complete, the Diagnostic Optimization Details panel appears. If your Spark application has performance issues, you can optimize the application based on the diagnostic information.
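
For example, if the diagnostic information reports data skew or shuffle spills, you might adjust the application's Spark configuration. The following sketch uses standard open-source Spark 3.x configuration keys; the values shown are assumptions and should be tuned to the diagnostic findings.

```python
from pyspark.sql import SparkSession

# A minimal sketch of settings you might apply after the diagnostics
# report data skew or shuffle spills. The values are assumptions.
spark = (
    SparkSession.builder.appName("tuned-app")
    # Let adaptive query execution (AQE) re-plan queries at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    # Split abnormally large partitions produced by skewed joins.
    .config("spark.sql.adaptive.skewJoin.enabled", "true")
    # Merge many small shuffle partitions to cut task overhead.
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    # Raise shuffle parallelism if spills indicate oversized partitions.
    .config("spark.sql.shuffle.partitions", "400")
    .getOrCreate()
)
```

If you submit applications on the Spark JAR Development page, these keys can typically be set in the application's conf settings instead of in code.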