
Serverless App Engine:Best practices for JVM heap size configuration

Last Updated:Sep 03, 2024

If the Java virtual machine (JVM) heap occupies too much memory, the following situations may occur: If the JVM runs in a Linux OS, the Java process may be killed by the Linux Out of Memory (OOM) Killer. If the JVM runs in a Docker container, the container instance may be frequently restarted. This topic provides recommendations on memory configurations for a JVM that runs in a container environment and answers to frequently asked questions about OOM errors.

(Recommended) Use the -XX:MaxRAMPercentage option to specify the maximum percentage of the container memory used by the JVM

We recommend the following JVM options in a container environment:

-XX:+UseContainerSupport -XX:InitialRAMPercentage=70.0 -XX:MaxRAMPercentage=70.0 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/home/admin/nas/gc-${POD_IP}-$(date '+%s').log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/admin/nas/dump-${POD_IP}-$(date '+%s').hprof

The following list describes the JVM options.

  • -XX:+UseContainerSupport: Instructs the JVM to detect the memory size and the number of processors of the container in which it runs, instead of those of the entire OS. The JVM uses the detected values for resource allocation. For example, the percentages set by -XX:InitialRAMPercentage and -XX:MaxRAMPercentage are calculated based on the detected container memory.

  • -XX:InitialRAMPercentage: Specifies the initial percentage of the container memory that the JVM can use. We recommend that you set this option and -XX:MaxRAMPercentage to the same value. The recommended value is 70.0, which indicates that the JVM initially uses 70% of the container memory.

  • -XX:MaxRAMPercentage: Specifies the maximum percentage of the container memory that the JVM can use. Because system components require memory overhead, we recommend that you set this option to a value less than or equal to 75.0. The recommended value is 70.0, which indicates that the JVM can use at most 70% of the container memory.

  • -XX:+PrintGCDetails: Prints GC details.

  • -XX:+PrintGCDateStamps: Prints GC timestamps. Sample timestamp: 2019-12-24T21:53:59.234+0800.

  • -Xloggc:/home/admin/nas/gc-${POD_IP}-$(date '+%s').log: Specifies the output path of the GC logs. Before you configure this option, make sure that the container directory for the logs exists. We recommend that you mount the container directory to an NAS directory or use Simple Log Service to store the logs. This way, directories are automatically created and the logs are persistently stored.

  • -XX:+HeapDumpOnOutOfMemoryError: Automatically generates a heap dump file when an OOM error occurs in the JVM.

  • -XX:HeapDumpPath=/home/admin/nas/dump-${POD_IP}-$(date '+%s').hprof: Specifies the path in which heap dump files are stored. Before you configure this option, make sure that the container directory for the dump files exists. We recommend that you mount the container directory to an NAS directory. This way, directories are automatically created and the dump files are persistently stored.

Note
  • The -XX:+UseContainerSupport option takes effect only in JDK 8u191 or later and in JDK 10 or later.

  • The -XX:+UseContainerSupport option is supported only in some OSs. For more information, see the official documentation of your Java version.

  • In JDK 11 and later versions, the following log options have been deprecated: -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, and -Xloggc:$LOG_PATH/gc.log. We recommend that you use the -Xlog:gc:$LOG_PATH/gc.log option.

  • Dragonwell 11 does not support the ${POD_IP} variable.

  • If you do not mount the /home/admin/nas container directory to an NAS directory, make sure that the container directory exists before the application startup. Otherwise, no logs are generated.
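
For example, on an instance with 4 GB of memory, the application can be started with a command similar to the following one, in which case the JVM can use at most about 70% of 4 GB for the heap. This is a minimal sketch: the application name app.jar is a placeholder, and the command assumes a shell in which the POD_IP environment variable is set.

java -XX:+UseContainerSupport -XX:InitialRAMPercentage=70.0 -XX:MaxRAMPercentage=70.0 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/home/admin/nas/gc-${POD_IP}-$(date '+%s').log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/admin/nas/dump-${POD_IP}-$(date '+%s').hprof -jar app.jar

# On JDK 11 or later, replace -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, and -Xloggc with -Xlog:gc:$LOG_PATH/gc.log, as described in the preceding note.
# To check the effective maximum heap size without starting the application:
java -XX:+UseContainerSupport -XX:MaxRAMPercentage=70.0 -XX:+PrintFlagsFinal -version | grep MaxHeapSize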

Use the -Xms and -Xmx options to manage the heap size

  • You can use the -Xms and -Xmx options to manage the heap size. Take note of the following items:

    • You must reconfigure the -Xmx option after you adjust the instance specifications.

    • If the parameter settings are not appropriate, the container may be forcibly shut down due to OOM errors even though the application memory does not reach the upper limit of the JVM heap size. For more information, see What do I do if the 137 exit code appears in my container?

  • We recommend that you configure the following JVM options:

    -Xms2048m -Xmx2048m -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/home/admin/nas/gc-${POD_IP}-$(date '+%s').log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/admin/nas/dump-${POD_IP}-$(date '+%s').hprof

    The following list describes the JVM options.

    • -Xms: Specifies the initial heap size of the JVM. To prevent the heap from being resized after each garbage collection, we recommend that you set -Xms and -Xmx to the same value.

    • -Xmx: Specifies the maximum heap size of the JVM. To prevent OOM errors in the container, we recommend that you reserve sufficient memory for the system to run.

    • -XX:+PrintGCDetails: Prints GC details.

    • -XX:+PrintGCDateStamps: Prints GC timestamps. Sample timestamp: 2019-12-24T21:53:59.234+0800.

    • -Xloggc:/home/admin/nas/gc-${POD_IP}-$(date '+%s').log: Specifies the output path of the GC logs. Before you configure this option, make sure that the container directory for the logs exists. We recommend that you mount the container directory to an NAS directory or use Simple Log Service to store the logs. This way, directories are automatically created and the logs are persistently stored.

    • -XX:+HeapDumpOnOutOfMemoryError: Automatically generates a heap dump file when an OOM error occurs in the JVM.

    • -XX:HeapDumpPath=/home/admin/nas/dump-${POD_IP}-$(date '+%s').hprof: Specifies the path in which heap dump files are stored. Before you configure this option, make sure that the container directory for the dump files exists. We recommend that you mount the container directory to an NAS directory. This way, directories are automatically created and the dump files are persistently stored.

  • The following list describes recommended heap sizes for common instance memory sizes.

    • 1 GB of memory: 600 MB heap size

    • 2 GB of memory: 1434 MB heap size

    • 4 GB of memory: 2867 MB heap size

    • 8 GB of memory: 5734 MB heap size
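
    These values are approximately 70% of the instance memory (about 60% for the 1 GB specification), which leaves the remaining memory for non-heap JVM usage and system components. The following shell sketch shows the arithmetic; the 70% ratio is inferred from the values above rather than an official formula.

    # Rough rule inferred from the list above: heap size ≈ 70% of instance memory.
    MEM_MB=2048                      # instance memory in MB
    HEAP_MB=$((MEM_MB * 70 / 100))   # 1433 MB, close to the recommended 1434 MB
    echo "-Xms${HEAP_MB}m -Xmx${HEAP_MB}m"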

Note
  • In JDK 11 and later versions, the following log options have been deprecated: -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, and -Xloggc:$LOG_PATH/gc.log. We recommend that you use the -Xlog:gc:$LOG_PATH/gc.log option.

  • Dragonwell 11 does not support the ${POD_IP} variable.

  • If you do not mount the /home/admin/nas container directory to an NAS directory, make sure that the container directory exists before the application startup. Otherwise, no logs are generated.

Use ossutil to download dump files

  1. Mount the container directory to an NAS directory. For more information, see Configure NAS storage.

  2. Configure JVM options.

    In this example, the /home/admin/nas container directory is mounted to an NAS directory and stores the dump files.

    -Xms2048m -Xmx2048m -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/home/admin/nas/gc-${POD_IP}-$(date '+%s').log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/admin/nas/dump-${POD_IP}-$(date '+%s').hprof
  3. If OOM errors occur in your application, dump files are generated and stored in the /home/admin/nas directory. You can use the ossutil tool to download the dump files to your on-premises machine and analyze the dump files. For more information, see Upload and download logs to check the health status of applications.
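
    The following commands are a minimal sketch of how a dump file can be transferred with ossutil. The bucket name examplebucket, the object prefix dumps/, and the dump file name are placeholders; for the authoritative workflow, see the topic linked above.

    # Upload the dump file from the instance to OSS:
    ossutil cp /home/admin/nas/dump-192.168.0.1-1703145600.hprof oss://examplebucket/dumps/
    # Download the dump file from OSS to your on-premises machine for analysis:
    ossutil cp oss://examplebucket/dumps/dump-192.168.0.1-1703145600.hprof ./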

FAQ

What do I do if the 137 exit code appears in my container?

If the used container memory reaches the upper limit, an OOM error may occur in the container and the container may be forcibly shut down. In this case, if the upper limit of the JVM heap size is not reached, no dump files are generated. To reserve sufficient memory for system components, we recommend that you decrease the upper limit of the JVM heap size.
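
If you can access the host or the container runtime, which may not be possible in a fully managed environment, the following commands are one way to confirm that a process was terminated by the OOM Killer. The container name my-container is a placeholder.

# Check whether the kernel OOM Killer terminated a process:
dmesg | grep -i "killed process"
# For Docker, check whether the container was OOM-killed and exited with code 137:
docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' my-container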

What do I do if an OOM error occurs and no dump files are generated?

If the OOM Killer mechanism is triggered, the process is terminated by the OS before an OOM error occurs in the JVM. In this case, no dump files are generated. You can use the following methods to prevent this issue:

  • For Java applications, you can decrease the JVM heap size. For more information, see the related instructions in this topic.

  • For non-Java applications, you can adjust the instance specifications to ensure sufficient memory resources. For more information, see Change the instance specifications of an application.

Can I set the heap size and the memory size to the same value?

No, you cannot set the heap size and the memory size to the same value because of system component overheads. For example, if you use Simple Log Service to collect logs, a small amount of memory is consumed. You must reserve sufficient memory for system components. For more information, see Configure log collection to Simple Log Service.

What do I do if an error occurs when I set the -XX:MaxRAMPercentage option to an integer in JDK 8?

This is a bug specific to JDK 8. For more information, see Java Bug Database. Sample scenario: If you set the -XX:MaxRAMPercentage option to 70 in JDK 8u191, an error may occur when the JVM starts up.

Solutions:

  • Solution 1: Set the -XX:MaxRAMPercentage option to 70.0.

    Note

    If you use the -XX:InitialRAMPercentage or -XX:MinRAMPercentage option, you cannot set its value to an integer either. Configure these options in the same way as described in Solution 1.

  • Solution 2: Upgrade JDK to version 10 or later.
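
For example, the following commands are a sketch for a JDK 8u191 environment and illustrate the bug described above.

# May fail to start because the value is an integer:
# java -XX:+UseContainerSupport -XX:MaxRAMPercentage=70 -version
# Starts normally because the value is a decimal:
java -XX:+UseContainerSupport -XX:MaxRAMPercentage=70.0 -version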

Why is the memory usage low when I set the JVM heap size to 6 GB?

If you configure the -Xms6g -Xmx6g options, the OS does not immediately allocate 6 GB of physical memory. Physical memory is allocated only when the memory is actually used. Therefore, the memory usage is relatively low when the application starts and increases as the heap is used.
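
One way to observe this behavior, assuming a Linux shell on the instance and a placeholder process ID:

# The JVM reserves 6 GB of virtual memory at startup, but the resident set size (RSS) stays much lower until the heap is actually used:
ps -o rss,vsz,cmd -p <java_pid>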