EMR events

Last Updated: Sep 04, 2024

This topic describes the types of E-MapReduce (EMR) events that are recorded by ActionTrail or monitored by CloudMonitor and that can be published to EventBridge.
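When an EMR event is delivered to EventBridge, the event type appears in the type attribute of the CloudEvents envelope. The following minimal Python sketch shows how a consumer might read that attribute. The payload is a hypothetical illustration: the id, source, and data values are placeholders, and real events carry additional attributes and a service-specific data payload (see the CloudEvents overview referenced at the end of this topic).

```python
import json

# A hypothetical EMR event in CloudEvents form, for illustration only.
# The id, source, and data values below are placeholders, not real event data.
raw_event = """
{
    "specversion": "1.0",
    "id": "example-event-id",
    "source": "acs.emr",
    "type": "emr:CloudMonitor:EMR-110401002",
    "data": {}
}
"""

event = json.loads(raw_event)

# The "type" attribute carries one of the values documented in the table below.
print(event["type"])  # emr:CloudMonitor:EMR-110401002
```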

Event types

The following table describes the types of EMR events that can be published to EventBridge. For a sketch that parses these type values, see the example after the table.

Event type

Value of the type parameter

Operation performed by Alibaba Cloud on a resource

emr:ActionTrail:AliyunServiceEvent

API operation call

emr:ActionTrail:ApiCall

Operation performed in a console

emr:ActionTrail:ConsoleOperation

The heartbeat message of the ECM agent expires

emr:CloudMonitor:Agent[EcmAgentHeartbeatExpired]

The ECM agent remains disconnected for a long time

emr:CloudMonitor:Agent[Maintenance.EcmAgentTimeout]

A workflow is complete

emr:CloudMonitor:EMR-110401002

A workflow is submitted

emr:CloudMonitor:EMR-110401003

A job is submitted

emr:CloudMonitor:EMR-110401004

A workflow node is started

emr:CloudMonitor:EMR-110401005

The status of a workflow node is checked

emr:CloudMonitor:EMR-110401006

A workflow node is complete

emr:CloudMonitor:EMR-110401007

A workflow node is stopped

emr:CloudMonitor:EMR-110401008

A workflow node is canceled

emr:CloudMonitor:EMR-110401009

A workflow is canceled

emr:CloudMonitor:EMR-110401010

A workflow is restarted

emr:CloudMonitor:EMR-110401011

A workflow is resumed

emr:CloudMonitor:EMR-110401012

A workflow is paused

emr:CloudMonitor:EMR-110401013

A workflow is stopped

emr:CloudMonitor:EMR-110401014

A workflow node fails

emr:CloudMonitor:EMR-110401015

A job fails

emr:CloudMonitor:EMR-110401016

A workflow fails

emr:CloudMonitor:EMR-210401001

The startup of a workflow node times out

emr:CloudMonitor:EMR-210401003

The startup of a job times out

emr:CloudMonitor:EMR-210401004

The status of the Airflow scheduler fails to be checked

emr:CloudMonitor:Maintenance[AIRFLOW.Scheduler.StatusCheck.Fail]

The status of the Airflow web server fails to be checked

emr:CloudMonitor:Maintenance[AIRFLOW.WebServer.Check.Fail]

The service status of the Airflow web server fails to be checked

emr:CloudMonitor:Maintenance[AIRFLOW.WebServer.StatusCheck.Fail]

The status of ApacheDS fails to be checked

emr:CloudMonitor:Maintenance[APACHEDS.StatusCheck.Fail]

The status of the ClickHouse server fails to be checked

emr:CloudMonitor:Maintenance[CLICKHOUSE.ServerStatusCheck.Fail]

Garbage collection (GC) for a Druid broker fails to be checked

emr:CloudMonitor:Maintenance[DRUID.Broker.GcCheck.Fail]

The status of a Druid broker fails to be checked

emr:CloudMonitor:Maintenance[DRUID.Broker.StatusCheck.Fail]

GC for a Druid coordinator fails to be checked

emr:CloudMonitor:Maintenance[DRUID.Coordinator.GcCheck.Fail]

The status of a Druid coordinator fails to be checked

emr:CloudMonitor:Maintenance[DRUID.Coordinator.StatusCheck.Fail]

GC for a Druid historical node fails to be checked

emr:CloudMonitor:Maintenance[DRUID.Historical.GcCheck.Fail]

The status of a Druid historical node fails to be checked

emr:CloudMonitor:Maintenance[DRUID.Historical.StatusCheck.Fail]

GC for a Druid middle manager fails to be checked

emr:CloudMonitor:Maintenance[DRUID.MiddleManager.GcCheck.Fail]

The status of a Druid middle manager fails to be checked

emr:CloudMonitor:Maintenance[DRUID.MiddleManager.StatusCheck.Fail]

GC for a Druid overlord fails to be checked

emr:CloudMonitor:Maintenance[DRUID.Overlord.GcCheck.Fail]

The status of a Druid overlord fails to be checked

emr:CloudMonitor:Maintenance[DRUID.Overlord.StatusCheck.Fail]

GC for a Druid router fails to be checked

emr:CloudMonitor:Maintenance[DRUID.Router.GcCheck.Fail]

The status of a Druid router fails to be checked

emr:CloudMonitor:Maintenance[DRUID.Router.StatusCheck.Fail]

GC for a Flink history server fails to be checked

emr:CloudMonitor:Maintenance[FLINK.HistoryServer.GcCheckP0.Fail]

The status of a Flink history server fails to be checked

emr:CloudMonitor:Maintenance[FLINK.HistoryServer.StatusCheck.Fail]

The status of a Flink Ververica Platform (VVP) server fails to be checked

emr:CloudMonitor:Maintenance[FLINK.VVP.StatusCheck.Fail]

The status of the HAS administrator fails to be checked

emr:CloudMonitor:Maintenance[HAS.Admin.StatusCheck.Fail]

The status of HAS fails to be checked

emr:CloudMonitor:Maintenance[HAS.Server.StatusCheck.Fail]

The availability of an HBase cluster fails to be checked

emr:CloudMonitor:Maintenance[HBASE.AvailabilityStatusCheck.Fail]

The inter-process communication (IPC) port of the HBase HMaster is unavailable

emr:CloudMonitor:Maintenance[HBASE.HMaster.IpcPortUnAvailable]

The status of the HBase HMaster fails to be checked

emr:CloudMonitor:Maintenance[HBASE.HMaster.StatusCheck.Fail]

The IPC port of an HBase HRegionServer is unavailable

emr:CloudMonitor:Maintenance[HBASE.HRegionServer.IpcPortUnAvailable]

GC for an HBase RegionServer fails to be checked

emr:CloudMonitor:Maintenance[HBASE.RegionServer.GcCheckP0.Fail]

The status of an HBase RegionServer fails to be checked

emr:CloudMonitor:Maintenance[HBASE.RegionServer.StatusCheck.Fail]

GC for an HBase Thrift server fails to be checked

emr:CloudMonitor:Maintenance[HBASE.ThriftServer.GcCheckP0.Fail]

The service port of an HBase Thrift server is unavailable

emr:CloudMonitor:Maintenance[HBASE.ThriftServer.ServicePortUnAvailable]

The status of an HBase Thrift server fails to be checked

emr:CloudMonitor:Maintenance[HBASE.ThriftServer.StatusCheck.Fail]

The availability of the Hadoop Distributed File System (HDFS) fails to be checked

emr:CloudMonitor:Maintenance[HDFS.AvailabilityStatusCheck.Fail]

The data transmission port of a DataNode is unavailable

emr:CloudMonitor:Maintenance[HDFS.DataNode.DataTransferPortUnAvailable]

A dead DataNode exists in the HDFS

emr:CloudMonitor:Maintenance[HDFS.DataNode.DeadDataNodesExist]

An exception occurs in secureMain of a DataNode

emr:CloudMonitor:Maintenance[HDFS.DataNode.ExceptionInSecureMain]

A DataNode unexpectedly exits

emr:CloudMonitor:Maintenance[HDFS.DataNode.ExitUnexpected]

One or more damaged disks exist in a DataNode

emr:CloudMonitor:Maintenance[HDFS.DataNode.FailueVolumes]

GC for a DataNode fails to be checked (P0)

emr:CloudMonitor:Maintenance[HDFS.DataNode.GcCheckP0.Fail]

The IPC port of a DataNode is unavailable

emr:CloudMonitor:Maintenance[HDFS.DataNode.IpcPortUnAvailable]

A DataNode cannot create a native thread due to an out-of-memory (OOM) error that occurs in the DataNode

emr:CloudMonitor:Maintenance[HDFS.DataNode.OOM.UnableToCreateNewNativeThread]

An OOM error occurs in a DataNode due to insufficient Java heap space

emr:CloudMonitor:Maintenance[HDFS.DataNode.OomForJavaHeapSpace]

The status of a DataNode fails to be checked

emr:CloudMonitor:Maintenance[HDFS.DataNode.StatusCheck.Fail]

Excessive dead DataNodes exist in the HDFS

emr:CloudMonitor:Maintenance[HDFS.DataNode.TooManyDataNodeDead]

One or more damaged disks exist in the HDFS

emr:CloudMonitor:Maintenance[HDFS.DataNode.VolumeFailuresExist]

The high availability (HA) status of the HDFS fails to be checked

emr:CloudMonitor:Maintenance[HDFS.HaStateCheck.Fail]

GC for a JournalNode fails to be checked (P0)

emr:CloudMonitor:Maintenance[HDFS.JournalNode.GcCheckP0.Fail]

The Remote Procedure Call (RPC) port of a JournalNode is unavailable

emr:CloudMonitor:Maintenance[HDFS.JournalNode.RpcPortUnAvailable]

The status of a JournalNode fails to be checked

emr:CloudMonitor:Maintenance[HDFS.JournalNode.StatusCheck.Fail]

A switchover occurs between active and standby NameNodes

emr:CloudMonitor:Maintenance[HDFS.NameNode.ActiveStandbySwitch]

The block capacity of a NameNode is running out

emr:CloudMonitor:Maintenance[HDFS.NameNode.BlockCapacityNearUsedUp]

Both NameNodes are active

emr:CloudMonitor:Maintenance[HDFS.NameNode.BothActive]

Both NameNodes are standby

emr:CloudMonitor:Maintenance[HDFS.NameNode.BothStandy]

One or more damaged blocks exist in the HDFS

emr:CloudMonitor:Maintenance[HDFS.NameNode.CorruptBlocksOccured]

A directory is formatted in the HDFS

emr:CloudMonitor:Maintenance[HDFS.NameNode.DirectoryFormatted]

A NameNode unexpectedly exits

emr:CloudMonitor:Maintenance[HDFS.NameNode.ExitUnexpectely]

GC for a NameNode fails to be checked (P0)

emr:CloudMonitor:Maintenance[HDFS.NameNode.GcCheckP0.Fail]

GC for a NameNode fails to be checked (P1)

emr:CloudMonitor:Maintenance[HDFS.NameNode.GcCheckP1.Fail]

A NameNode remains in safe mode for a long time

emr:CloudMonitor:Maintenance[HDFS.NameNode.InSafeMode]

The IPC port of a NameNode is unavailable

emr:CloudMonitor:Maintenance[HDFS.NameNode.IpcPortUnAvailable]

An exception occurs when a NameNode loads FsImage

emr:CloudMonitor:Maintenance[HDFS.NameNode.LoadFsImageException]

A NameNode is in safe mode due to insufficient disk space

emr:CloudMonitor:Maintenance[HDFS.NameNode.LowAvailableDiskSpaceAndInSafeMode]

A data block is missing in the HDFS

emr:CloudMonitor:Maintenance[HDFS.NameNode.MissingBlock]

An OOM error occurs in a NameNode

emr:CloudMonitor:Maintenance[HDFS.NameNode.OOM]

A NameNode does not have sufficient resources

emr:CloudMonitor:Maintenance[HDFS.NameNode.ResourceLow]

The RPC call queue of a NameNode is too long

emr:CloudMonitor:Maintenance[HDFS.NameNode.RpcPortCallQueueLengthTooLong]

The status of a NameNode fails to be checked

emr:CloudMonitor:Maintenance[HDFS.NameNode.StatusCheck.Fail]

A NameNode fails to synchronize logs

emr:CloudMonitor:Maintenance[HDFS.NameNode.SyncJournalFailed]

Excessive block space is used in the HDFS

emr:CloudMonitor:Maintenance[HDFS.NameNode.TooMuchBlockCapacityUsed]

Excessive DataNode space is used

emr:CloudMonitor:Maintenance[HDFS.NameNode.TooMuchDataNodeCapacityUsed]

Excessive storage space is used in the HDFS

emr:CloudMonitor:Maintenance[HDFS.NameNode.TooMuchDfsCapacityUsed]

Excessive files and blocks consume a large amount of heap memory

emr:CloudMonitor:Maintenance[HDFS.NameNode.TooMuchHeapUsedByTooManyFilesAndBlocks]

A data write from the HDFS to a JournalNode times out

emr:CloudMonitor:Maintenance[HDFS.NameNode.WriteToJournalNodeTimeout]

The ZKFailoverController (ZKFC) triggers a switchover between active and standby NameNodes

emr:CloudMonitor:Maintenance[HDFS.ZKFC.ActiveStandbySwitchOccured]

The port of the HDFS ZKFC is unavailable

emr:CloudMonitor:Maintenance[HDFS.ZKFC.PortUnAvailable]

The status of the ZKFC fails to be checked

emr:CloudMonitor:Maintenance[HDFS.ZKFC.StatusCheck.Fail]

A transport layer exception occurs when the ZKFC monitors the health status of a NameNode

emr:CloudMonitor:Maintenance[HDFS.ZKFC.TransportLevelExceptionInMonitorHealth]

The ZKFC cannot connect to the ZooKeeper quorum

emr:CloudMonitor:Maintenance[HDFS.ZKFC.UnableToConnectToQuorum]

The ZKFC cannot be started

emr:CloudMonitor:Maintenance[HDFS.ZKFC.UnableToStartZKFC]

The availability of Apache Hive fails to be checked

emr:CloudMonitor:Maintenance[HIVE.AvailabilityStatusCheck.Fail]

The communication link of a Hive Metastore (HMS) database fails

emr:CloudMonitor:Maintenance[HIVE.HiveMetaStore.DataBaseCommunicationLinkFailure]

The connection to an HMS database fails

emr:CloudMonitor:Maintenance[HIVE.HiveMetaStore.DataBaseConnectionFailed]

An HMS database runs out of disk space

emr:CloudMonitor:Maintenance[HIVE.HiveMetaStore.DataBaseDiskQuotaUsedup]

The port for communication between the HMS and HiveServer2 is unavailable

emr:CloudMonitor:Maintenance[HIVE.HiveMetaStore.hiveServer2PortUnAvailable]

A Java Database Connectivity (JDBC) exception occurs in the HMS

emr:CloudMonitor:Maintenance[HIVE.HiveMetaStore.JdbcCommunicationException]

The number of queries for the HMS exceeds the upper limit

emr:CloudMonitor:Maintenance[HIVE.HiveMetaStore.MaxQuestionsExceeded]

The number of updates for the HMS exceeds the upper limit

emr:CloudMonitor:Maintenance[HIVE.HiveMetaStore.MaxUpdatesExceeded]

The number of user connections for the HMS exceeds the upper limit

emr:CloudMonitor:Maintenance[HIVE.HiveMetaStore.MaxUserConnectionExceeded]

An OOM error occurs in the HMS

emr:CloudMonitor:Maintenance[HIVE.HiveMetaStore.OomOccured]

An error occurs when the HMS configuration file is parsed

emr:CloudMonitor:Maintenance[HIVE.HiveMetaStore.ParseConfError]

The port of the HMS is unavailable

emr:CloudMonitor:Maintenance[HIVE.HiveMetaStore.PortUnAvailable]

The required table of the HMS is missing

emr:CloudMonitor:Maintenance[HIVE.HiveMetaStore.RequiredTableMissing]

GC for HiveServer fails to be checked (P0)

emr:CloudMonitor:Maintenance[HIVE.HiveServer.GcCheckP0.Fail]

GC for HiveServer fails to be checked (P1)

emr:CloudMonitor:Maintenance[HIVE.HiveServer.GcCheckP1.Fail]

The status of HiveServer fails to be checked

emr:CloudMonitor:Maintenance[HIVE.HiveServer.StatusCheck.Fail]

A connection to HiveServer2 cannot be established by using any of the provided uniform resource identifiers (URIs)

emr:CloudMonitor:Maintenance[HIVE.HiveServer2.CannotConnectByAnyURIsProvided]

The connection between HiveServer2 and ZooKeeper times out

emr:CloudMonitor:Maintenance[HIVE.HiveServer2.ConnectToZkTimeout]

An error occurs when the HiveServer2 configuration is parsed

emr:CloudMonitor:Maintenance[HIVE.HiveServer2.ErrorParseConf]

An error occurs when HiveServer2 is started

emr:CloudMonitor:Maintenance[HIVE.HiveServer2.ErrorStartingHiveServer]

HiveServer2 fails to initialize a Metastore client

emr:CloudMonitor:Maintenance[HIVE.HiveServer2.FailedInitMetaStoreClient]

HiveServer2 fails to connect to the Metastore server

emr:CloudMonitor:Maintenance[HIVE.HiveServer2.FailedToConnectToMetaStoreServer]

An OOM error occurs in HiveServer2

emr:CloudMonitor:Maintenance[HIVE.HiveServer2.HiveServer2OOM]

The latency of the Metastore fails to be checked (P0)

emr:CloudMonitor:Maintenance[HIVE.MetaStore.DelayCheckP0.Fail]

The latency of the Metastore fails to be checked (P1)

emr:CloudMonitor:Maintenance[HIVE.MetaStore.DelayCheckP1.Fail]

GC for the Metastore fails to be checked (P0)

emr:CloudMonitor:Maintenance[HIVE.MetaStore.GcCheckP0.Fail]

GC for the Metastore fails to be checked (P1)

emr:CloudMonitor:Maintenance[HIVE.MetaStore.GcCheckP1.Fail]

The status of the Metastore fails to be checked

emr:CloudMonitor:Maintenance[HIVE.MetaStore.StatusCheck.Fail]

Stuttering occurs on a host due to high CPU utilization

emr:CloudMonitor:Maintenance[HOST.CpuStuck]

The memory usage is high

emr:CloudMonitor:Maintenance[HOST.HighMemoryUsage]

The absolute amount of free memory on a host is low

emr:CloudMonitor:Maintenance[HOST.LowAbsoluteFreeMemory]

The available disk space in the /mnt/disk1 directory is low

emr:CloudMonitor:Maintenance[HOST.LowDiskForMntDisk1]

The available disk space of the root file system is low

emr:CloudMonitor:Maintenance[HOST.LowRootfsDisk]

An OOM error is found in the /var/log/message log on a host

emr:CloudMonitor:Maintenance[HOST.OomFoundInVarLogMessage]

Excessive processes exist on a primary node

emr:CloudMonitor:Maintenance[HOST.TooManyProcessesOnMasterHost]

A host is shut down

emr:CloudMonitor:Maintenance[HOST.VmHostShutDown]

A host is started

emr:CloudMonitor:Maintenance[HOST.VmHostStartUp]

The management port of Oozie is unavailable

emr:CloudMonitor:Maintenance[HUE.OozieAdminPortUnAvailable]

The service port of HUE is unavailable

emr:CloudMonitor:Maintenance[HUE.PortUnAvailable]

The status of the HUE RunCherryPyServer fails to be checked

emr:CloudMonitor:Maintenance[HUE.RunCherryPyServer.StatusCheck.Fail]

The status of HUE fails to be checked

emr:CloudMonitor:Maintenance[HUE.StatusCheck.Fail]

The availability of Apache Impala fails to be checked

emr:CloudMonitor:Maintenance[IMPALA.AvailableCheck.Fail]

The availability of the Impala Catalog daemon (catalogd) fails to be checked

emr:CloudMonitor:Maintenance[IMPALA.Catalogd.AvailableCheck.Fail]

The availability of the Impala daemon (impalad) fails to be checked

emr:CloudMonitor:Maintenance[IMPALA.Impalad.AvailableCheck.Fail]

The availability of the Impala Statestore daemon (statestored) fails to be checked

emr:CloudMonitor:Maintenance[IMPALA.StateStored.AvailableCheck.Fail]

The status of the JindoFS Manager service fails to be checked

emr:CloudMonitor:Maintenance[JINDOFS.JindoFsManagerService.StatusCheck.Fail]

The status of the JindoFS Namespace service fails to be checked

emr:CloudMonitor:Maintenance[JINDOFS.JindoFsNamespaceStatusCheck.Fail]

The status of the JindoFS Storage service fails to be checked

emr:CloudMonitor:Maintenance[JINDOFS.JindoFsStorageServiceStatusCheck.Fail]

The status of JindoFS fails to be checked

emr:CloudMonitor:Maintenance[JINDOFS.StatusCheck.Fail]

The availability of a Kafka broker fails to be checked

emr:CloudMonitor:Maintenance[KAFKA.Broker.AvailableCheck.Fail]

GC for a Kafka broker fails to be checked (P0)

emr:CloudMonitor:Maintenance[KAFKA.Broker.GcCheckP0.Fail]

GC for a Kafka broker fails to be checked (P1)

emr:CloudMonitor:Maintenance[KAFKA.Broker.GcCheckP1.Fail]

The status of a Kafka broker fails to be checked

emr:CloudMonitor:Maintenance[KAFKA.Broker.StateCheck.Fail]

Kafka Manager fails to be checked

emr:CloudMonitor:Maintenance[KAFKA.KafkaManager.Check.Fail]

The Kafka metadata monitor fails to be checked

emr:CloudMonitor:Maintenance[KAFKA.KafkaMetadataMonitor.Check.Fail]

The Kafka REST Proxy fails to be checked

emr:CloudMonitor:Maintenance[KAFKA.RestProxy.Check.Fail]

The Kafka Schema Registry fails to be checked

emr:CloudMonitor:Maintenance[KAFKA.SchemaRegistry.Check.Fail]

GC for Knox fails to be checked

emr:CloudMonitor:Maintenance[KNOX.GcCheckP0.Fail]

The status of Knox fails to be checked

emr:CloudMonitor:Maintenance[KNOX.StatusCheck.Fail]

The health status of Apache Kudu fails to be checked

emr:CloudMonitor:Maintenance[KUDU.HealthyCheck.Fail]

The status of a Kudu master fails to be checked

emr:CloudMonitor:Maintenance[KUDU.MasterStatusCheck.Fail]

The status of a Kudu tserver fails to be checked

emr:CloudMonitor:Maintenance[KUDU.TServerStatusCheck.Fail]

GC for Apache Livy fails to be checked

emr:CloudMonitor:Maintenance[LIVY.GcCheckP0.Fail]

The status of Apache Livy fails to be checked

emr:CloudMonitor:Maintenance[LIVY.StatusCheck.Fail]

GC for Apache Oozie fails to be checked

emr:CloudMonitor:Maintenance[OOZIE.GcCheckP0.Fail]

The status of Apache Oozie fails to be checked

emr:CloudMonitor:Maintenance[OOZIE.StatusCheck.Fail]

The status of OpenLDAP fails to be checked

emr:CloudMonitor:Maintenance[OPENLDAP.StatusCheck.Fail]

The availability of Presto fails to be checked

emr:CloudMonitor:Maintenance[PRESTO.AvailabilityStatusCheck.Fail]

GC for a Presto coordinator fails to be checked

emr:CloudMonitor:Maintenance[PRESTO.Coordinator.GcCheckP0.Fail]

The status of a Presto coordinator fails to be checked

emr:CloudMonitor:Maintenance[PRESTO.Coordinator.StatusCheck.Fail]

GC for a Presto worker fails to be checked

emr:CloudMonitor:Maintenance[PRESTO.Worker.GcCheckP0.Fail]

The status of a Presto worker fails to be checked

emr:CloudMonitor:Maintenance[PRESTO.Worker.StatusCheck.Fail]

GC for Ranger Admin fails to be checked

emr:CloudMonitor:Maintenance[RANGER.ADMIN.GcCheck.Fail]

The status of Ranger Admin fails to be checked

emr:CloudMonitor:Maintenance[RANGER.ADMIN.StatusCheck.Fail]

The status of the Ranger Solr component fails to be checked

emr:CloudMonitor:Maintenance[RANGER.Solr.StatusCheck.Fail]

The status of Ranger UserSync fails to be checked

emr:CloudMonitor:Maintenance[RANGER.UserSync.StatusCheck.Fail]

GC for the Spark history server fails to be checked

emr:CloudMonitor:Maintenance[SPARK.HistoryServer.GcCheckP0.Fail]

The status of the Spark history server fails to be checked

emr:CloudMonitor:Maintenance[SPARK.HistoryServer.StatusCheck.Fail]

An OOM error occurs on the Spark history server

emr:CloudMonitor:Maintenance[SPARK.SparkHistory.OomOccured]

The status of the Spark Thrift server fails to be checked

emr:CloudMonitor:Maintenance[SPARK.ThriftServer.StatusCheck.Fail]

The Storm Nimbus Thrift port is unavailable

emr:CloudMonitor:Maintenance[STORM.Nimbus.ThriftPortUnAvailable]

The status of Apache Superset fails to be checked

emr:CloudMonitor:Maintenance[SUPERSET.StatusCheck.Fail]

GC for the TEZ Tomcat fails to be checked

emr:CloudMonitor:Maintenance[TEZ.Tomcat.GcCheckP0.Fail]

The status of the TEZ Tomcat fails to be checked

emr:CloudMonitor:Maintenance[TEZ.Tomcat.StatusCheck.Fail]

GC for AppTimeLine fails to be checked (P0)

emr:CloudMonitor:Maintenance[YARN.AppTimeLine.GcCheckP0.Fail]

The status of AppTimeLine fails to be checked

emr:CloudMonitor:Maintenance[YARN.AppTimeLine.StatusCheck.Fail]

The HA status of YARN fails to be checked

emr:CloudMonitor:Maintenance[YARN.HaStateCheck.Fail]

The JobHistory service unexpectedly exits

emr:CloudMonitor:Maintenance[YARN.JobHistory.ExitUnExpectedly]

GC for JobHistory fails to be checked (P0)

emr:CloudMonitor:Maintenance[YARN.JobHistory.GcCheckP0.Fail]

The service port of JobHistory is unavailable

emr:CloudMonitor:Maintenance[YARN.JobHistory.PortUnAvailable]

An error occurs when the JobHistory service is started

emr:CloudMonitor:Maintenance[YARN.JobHistory.StartingError]

The status of JobHistory fails to be checked

emr:CloudMonitor:Maintenance[YARN.JobHistory.StatusCheck.Fail]

One or more dead NodeManagers are detected

emr:CloudMonitor:Maintenance[YARN.NodeManager.DeadNodeDetected]

A NodeManager fails to start NodeStatusUpdater

emr:CloudMonitor:Maintenance[YARN.NodeManager.ErrorRebootingNodeStatusUpdater]

GC for a NodeManager fails to be checked (P0)

emr:CloudMonitor:Maintenance[YARN.NodeManager.GcCheckP0.Fail]

One or more NodeManagers are missing

emr:CloudMonitor:Maintenance[YARN.NodeManager.LostNodesExist]

An OOM error occurs in a NodeManager

emr:CloudMonitor:Maintenance[YARN.NodeManager.OOM]

An error occurs when a NodeManager is started

emr:CloudMonitor:Maintenance[YARN.NodeManager.StartingError]

The status of a NodeManager fails to be checked

emr:CloudMonitor:Maintenance[YARN.NodeManager.StatusCheck.Fail]

A NodeManager becomes unhealthy due to disk errors

emr:CloudMonitor:Maintenance[YARN.NodeManager.UnHealthyForDiskFailed]

One or more unhealthy NodeManagers exist in YARN

emr:CloudMonitor:Maintenance[YARN.NodeManager.UnHealthyNodesExist]

A switchover occurs between active and standby ResourceManagers

emr:CloudMonitor:Maintenance[YARN.ResourceManager.ActiveStandbySwitch]

Both ResourceManagers are active

emr:CloudMonitor:Maintenance[YARN.ResourceManager.BothInActive]

Both ResourceManagers are standby

emr:CloudMonitor:Maintenance[YARN.ResourceManager.BothInStandby]

A ResourceManager fails to be switched to the active state

emr:CloudMonitor:Maintenance[YARN.ResourceManager.CouldNotTransitionToActive]

An error occurs when a ResourceManager is started

emr:CloudMonitor:Maintenance[YARN.ResourceManager.ErrorInStarting]

An error occurs when a ResourceManager is switched to the active state

emr:CloudMonitor:Maintenance[YARN.ResourceManager.ErrorInTransitionToActiveMode]

A ResourceManager unexpectedly exits

emr:CloudMonitor:Maintenance[YARN.ResourceManager.ExitUnexpected]

GC for a ResourceManager fails to be checked (P0)

emr:CloudMonitor:Maintenance[YARN.ResourceManager.GcCheckP0.Fail]

GC for a ResourceManager fails to be checked (P1)

emr:CloudMonitor:Maintenance[YARN.ResourceManager.GcCheckP1.Fail]

RM_HA_ID cannot be found due to the invalid configuration of a ResourceManager

emr:CloudMonitor:Maintenance[YARN.ResourceManager.InvalidConf.CannotFoundRMHAID]

An OOM error occurs in a ResourceManager

emr:CloudMonitor:Maintenance[YARN.ResourceManager.OOM]

The service port of a ResourceManager in YARN is unavailable

emr:CloudMonitor:Maintenance[YARN.ResourceManager.PortUnAvailable]

The restart status of a ResourceManager fails to be checked

emr:CloudMonitor:Maintenance[YARN.ResourceManager.RestartCheck.Fail]

The status of a ResourceManager fails to be checked

emr:CloudMonitor:Maintenance[YARN.ResourceManager.StatusCheck.Fail]

An unknown host exception occurs in a ResourceManager

emr:CloudMonitor:Maintenance[YARN.ResourceManager.UnkownHostException]

ZKRMStateStore cannot connect to ZooKeeper in YARN

emr:CloudMonitor:Maintenance[YARN.ResourceManager.ZKRMStateStoreCannotConnectZK]

The status of YARN fails to be checked

emr:CloudMonitor:Maintenance[YARN.StatusCheck.Fail]

An error occurs when the Timeline server is started

emr:CloudMonitor:Maintenance[YARN.TimelineServer.ErrorInStarting]

The Timeline server unexpectedly exits

emr:CloudMonitor:Maintenance[YARN.TimelineServer.ExistUnexpectedly]

The port of the YARN Timeline server is unavailable

emr:CloudMonitor:Maintenance[YARN.TimelineServer.PortUnAvailable]

The status of WebAppProxy fails to be checked

emr:CloudMonitor:Maintenance[YARN.WebAppProxy.StatusCheck.Fail]

The service port of the YARN WebAppProxy server is unavailable

emr:CloudMonitor:Maintenance[YARN.WebAppProxyServer.PortUnAvailable]

The status of Zeppelin fails to be checked

emr:CloudMonitor:Maintenance[ZEPPELIN.Server.StatusCheck.Fail]

The status of the Zeppelin component fails to be checked

emr:CloudMonitor:Maintenance[ZEPPELIN.ServerCheck.Fail]

The client port of ZooKeeper is unavailable

emr:CloudMonitor:Maintenance[ZOOKEEPER.ClientPortUnAvailable]

The status of a ZooKeeper cluster fails to be checked

emr:CloudMonitor:Maintenance[ZOOKEEPER.ClusterStatusCheck.Fail]

GC for ZooKeeper fails to be checked

emr:CloudMonitor:Maintenance[ZOOKEEPER.GcCheckP0.Fail]

A leader/follower switchover occurs in ZooKeeper

emr:CloudMonitor:Maintenance[ZOOKEEPER.LeaderFollowerSwitch]

The leader port of ZooKeeper is unavailable

emr:CloudMonitor:Maintenance[ZOOKEEPER.LeaderPortUnAvailable]

The peer port of ZooKeeper is unavailable

emr:CloudMonitor:Maintenance[ZOOKEEPER.PeerPortUnAvailable]

The status of a ZooKeeper process fails to be checked

emr:CloudMonitor:Maintenance[ZOOKEEPER.StatusCheck.Fail]

ZooKeeper cannot run QuorumServer

emr:CloudMonitor:Maintenance[ZOOKEEPER.UnableToRunQuorumServer]

A scaling activity fails

emr:CloudMonitor:Scaling[ScalingActivity:Failed]

A scaling activity is rejected

emr:CloudMonitor:Scaling[ScalingActivity:Rejected]

A scaling activity times out

emr:CloudMonitor:Scaling[ScalingActivity:Timeout]

The status of a service component is checked

emr:CloudMonitor:StatusCheck
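The type values in the preceding table follow a small set of patterns: ActionTrail events (emr:ActionTrail:...), CloudMonitor workflow and job events (emr:CloudMonitor:EMR-...), and CloudMonitor agent, maintenance, scaling, and status-check events, some of which carry a bracketed detail code. The following Python sketch splits a type value into these parts so that a consumer can route events. It is an illustration based only on the values listed above, not an official SDK or API.

```python
import re
from typing import Optional, Tuple

# Matches values such as:
#   emr:ActionTrail:ApiCall
#   emr:CloudMonitor:EMR-110401002
#   emr:CloudMonitor:Maintenance[HDFS.NameNode.OOM]
#   emr:CloudMonitor:Scaling[ScalingActivity:Failed]
_TYPE_PATTERN = re.compile(
    r"^emr:(?P<channel>ActionTrail|CloudMonitor):"
    r"(?P<category>[^\[\]]+?)(?:\[(?P<detail>.+)\])?$"
)


def parse_emr_event_type(value: str) -> Optional[Tuple[str, str, Optional[str]]]:
    """Split an EMR event type into (channel, category, detail)."""
    match = _TYPE_PATTERN.match(value)
    if match is None:
        return None
    return match.group("channel"), match.group("category"), match.group("detail")


# Example: route CloudMonitor maintenance alerts for HDFS separately.
parsed = parse_emr_event_type("emr:CloudMonitor:Maintenance[HDFS.NameNode.OOM]")
if parsed is not None:
    channel, category, detail = parsed
    if channel == "CloudMonitor" and category == "Maintenance" and detail and detail.startswith("HDFS."):
        print(f"HDFS maintenance alert: {detail}")
```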

For more information about the parameters defined in the CloudEvents specification, see Overview.