In the Function Compute console, you can query the overview metrics of Function Compute resources and detailed monitoring metrics at the region, service, function, and instance levels. You can also specify metric names to monitor specific metrics. This topic describes the monitoring metrics of Function Compute.
Resource metrics
You can log on to the Function Compute console to view overview resource metrics in the Resource Usage Statistics section on the Overview page.
Resource overview metrics are used to monitor and measure the overall resource usage and network traffic of Function Compute in all regions or in a specific region. The following table describes the metrics. All metrics are summed at a one-day or one-month granularity.
| Category | Metric name | Unit | Description |
| --- | --- | --- | --- |
| Overview | Invocations | Count | The total number of requests for function invocations. |
| Overview | vCPU Usage | vCPU-second | The vCPU resources consumed during function invocations. The value is calculated by multiplying vCPU sizes by function execution durations. For an example calculation, see the sketch after this table. |
| Overview | Memory Usage | GB-second | The amount of memory consumed during function invocations. The value is calculated by multiplying memory sizes by function execution durations. |
| Overview | Disk Usage | GB-second | The disk resources consumed during function invocations. The value is calculated by multiplying disk sizes by function execution durations. |
| Overview | Outbound Internet Traffic | GB | The total outbound Internet traffic that is generated during function executions within a specified statistical period. |
| Overview | GPU Usage | GB-second | The GPU resources consumed during function invocations. The value is calculated by multiplying GPU sizes by function execution durations. |
| vCPU usage | Active vCPU Usage | vCPU-second | The vCPU resources consumed by active instances during function invocations. The value is calculated by multiplying vCPU sizes by function execution durations. |
| vCPU usage | Idle vCPU Usage | vCPU-second | The vCPU resources consumed by idle instances during function invocations. The value is calculated by multiplying vCPU sizes by idle durations. |
| GPU usage | Active GPU Usage | GB-second | The GPU resources consumed by active instances during function invocations. The value is calculated by multiplying GPU sizes by function execution durations. |
| GPU usage | Idle GPU Usage | GB-second | The GPU resources consumed by idle instances during function invocations. The value is calculated by multiplying GPU sizes by idle durations. |
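The following example is a minimal sketch of how the usage values in the preceding table are derived. The function specification and the number of invocations are hypothetical and only illustrate the "resource size multiplied by execution duration" rule.

```python
# Hypothetical function specification: 0.5 vCPU, 1 GB of memory, 512 MB of disk.
invocations = 1_000_000    # invocations in the statistical period
duration_s = 0.2           # execution duration per invocation, in seconds

vcpu_size = 0.5            # vCPU
memory_size_gb = 1.0       # GB
disk_size_gb = 0.5         # GB

# Each usage metric is the resource size multiplied by the execution duration,
# summed over all invocations in the period.
vcpu_usage = vcpu_size * duration_s * invocations          # 100,000 vCPU-seconds
memory_usage = memory_size_gb * duration_s * invocations   # 200,000 GB-seconds
disk_usage = disk_size_gb * duration_s * invocations       # 100,000 GB-seconds

print(vcpu_usage, memory_usage, disk_usage)
```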
Region-level metrics
Log on to the Function Compute console. In the left-side navigation pane, open the monitoring page. On the page that appears, you can view region-level metrics.
Region-level metrics are used to monitor the resource usage of Function Compute in a region. The following table describes region-level metrics.
| Category | Metric name | Unit | Description |
| --- | --- | --- | --- |
| Function executions | RegionTotalInvocations | Count | The total number of function invocations in a region. The sum is calculated every minute or every hour. |
| Errors | RegionServerErrors | Count | The total number of failed function invocations in a region caused by Function Compute server errors. The sum is calculated every minute or every hour. Note: HTTP trigger invocations for which 5xx status codes are returned are also counted. |
| Errors | RegionClientErrors | Count | The total number of requests in a region that are not executed or fail to be executed due to Function Compute client errors and for which HTTP 4xx status codes are returned. The sum is calculated every minute or every hour. Note: For more information about these error codes, see Public error codes. |
| Errors | RegionFunctionErrors | Count | The total number of failed invocations in a region caused by function errors. The sum is calculated every minute or every hour. |
| Errors due to throttling | RegionThrottles | Count | The total number of failed invocations in a region caused by excessive concurrent instances and for which the HTTP status code 429 is returned. The sum is calculated every minute or every hour. |
| Errors due to throttling | RegionResourceThrottles | Count | The total number of failed invocations in a region caused by excessive instances and for which the HTTP status code 429 is returned. The sum is calculated every minute or every hour. |
| Number of on-demand instances | RegionConcurrencyLimit | Count | The maximum number of on-demand instances in a region within the current account. |
| Number of on-demand instances | RegionConcurrentCount | Count | The actual number of on-demand instances that are concurrently occupied when functions in a region are invoked. The sum is calculated every minute or every hour. For an illustration of how this metric relates to RegionConcurrencyLimit, see the example after this table. |
| Number of provisioned instances | RegionProvisionedCurrentInstance | Count | The total number of provisioned instances that are created for all functions in a region within the current account. |
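The following example is a minimal sketch of how the two on-demand instance metrics in the preceding table relate to throttling. The metric values are hypothetical.

```python
# Hypothetical values read from the two on-demand instance metrics above.
region_concurrency_limit = 300   # RegionConcurrencyLimit: on-demand instance quota
region_concurrent_count = 270    # RegionConcurrentCount: instances currently occupied

headroom = region_concurrency_limit - region_concurrent_count
usage_ratio = region_concurrent_count / region_concurrency_limit

# When the number of occupied instances approaches the quota, new requests may be
# throttled (see RegionThrottles). In that case, consider requesting a quota
# increase or configuring provisioned instances.
print(f"headroom={headroom}, usage={usage_ratio:.0%}")  # headroom=30, usage=90%
```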
Service-level metrics
Log on to the Function Compute console. In the left-side navigation pane, open the monitoring page. On the page that appears, click the name of the service whose metrics you want to view in the Service Name column.
Service-level metrics are used to monitor and measure the usage of resources from the perspective of services. The following table describes service-level metrics.
| Category | Metric name | Unit | Description |
| --- | --- | --- | --- |
| Function executions | ServiceTotalInvocations | Count | The total number of function invocations in a service. The sum is calculated every minute or every hour. |
| Number of errors | ServiceServerErrors | Count | The total number of failed invocations in a service caused by Function Compute system errors. The sum is calculated every minute or every hour. Note: HTTP trigger invocations for which 5xx status codes are returned are also counted. |
| Number of errors | ServiceClientErrors | Count | The total number of requests in a service that are not executed or fail to be executed due to Function Compute client errors and for which HTTP 4xx status codes are returned. The sum is calculated every minute or every hour. Note: For more information about these error codes, see Public error codes. |
| Number of errors | ServiceFunctionErrors | Count | The total number of failed invocations in a service caused by function errors. The sum is calculated every minute or every hour. |
| Errors due to throttling | ServiceThrottles | Count | The total number of failed invocations in a service caused by excessive concurrent instances and for which the HTTP status code 429 is returned. The sum is calculated every minute or every hour. |
| Errors due to throttling | ServiceResourceThrottles | Count | The total number of failed invocations in a service caused by excessive instances and for which the HTTP status code 429 is returned. The sum is calculated every minute or every hour. |
| Number of provisioned instances | ServiceProvisionedCurrentInstance | Count | The total number of provisioned instances for all functions in the current service. |
Function-level metrics
Log on to the Function Compute console. In the left-side navigation pane, open the monitoring page. On the page that appears, click the name of the desired service in the Service Name column. In the Function Name section of the service-level monitoring page, click the name of the function whose metrics you want to view.
Function-level metrics are used to monitor and measure the resource usage of functions from the perspective of a single function, of all functions in a service version, and of all functions under a service alias. The following table describes function-level metrics.
Metrics that are measured from the perspective of a service version or alias use the FunctionQualifier prefix. For example, FunctionQualifierTotalInvocations indicates the total number of invocations of the functions under a specific version or alias.
Function Compute can monitor and measure the CPU utilization, memory usage, and network traffic of a function only after instance-level metrics are enabled. For more information, see Instance-level metrics.
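The following snippet is a minimal sketch of the FunctionQualifier naming rule described above. The helper function is hypothetical and only illustrates how the FunctionQualifier prefix replaces the Function prefix for metrics that are measured for a service version or alias.

```python
def metric_name(base: str, qualified: bool) -> str:
    # Hypothetical helper: metrics measured for a service version or alias use
    # the "FunctionQualifier" prefix; metrics measured for the function itself
    # use the "Function" prefix.
    return ("FunctionQualifier" if qualified else "Function") + base

print(metric_name("TotalInvocations", qualified=False))  # FunctionTotalInvocations
print(metric_name("TotalInvocations", qualified=True))   # FunctionQualifierTotalInvocations
```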
| Category | Metric name | Unit | Description |
| --- | --- | --- | --- |
| Invocations | FunctionTotalInvocations | Count | The total number of function invocations in provisioned and on-demand modes. The sum is calculated every minute or every hour. |
| Invocations | FunctionProvisionInvocations | Count | The total number of function invocations in provisioned mode. The sum is calculated every minute or every hour. |
| HTTP status codes | FunctionHTTPStatus2xx | Count | The number of invocations per second for which 2xx HTTP status codes are returned. The statistics are calculated at a granularity of 1 minute, 5 minutes, or 1 hour. |
| HTTP status codes | FunctionHTTPStatus3xx | Count | The number of invocations per second for which 3xx HTTP status codes are returned. The statistics are calculated at a granularity of 1 minute, 5 minutes, or 1 hour. |
| HTTP status codes | FunctionHTTPStatus4xx | Count | The number of invocations per second for which 4xx HTTP status codes are returned. The statistics are calculated at a granularity of 1 minute, 5 minutes, or 1 hour. |
| HTTP status codes | FunctionHTTPStatus5xx | Count | The number of invocations per second for which 5xx HTTP status codes are returned. The statistics are calculated at a granularity of 1 minute, 5 minutes, or 1 hour. |
| Number of errors | FunctionServerErrors | Count | The total number of requests for invocations of a function that fail due to Function Compute system errors. The sum is calculated every minute or every hour. Note: HTTP trigger invocations for which 5xx status codes are returned are also counted. |
| Number of errors | FunctionClientErrors | Count | The total number of requests that are not executed or fail to be executed due to Function Compute client errors and for which HTTP 4xx status codes are returned. The sum is calculated every minute or every hour. Note: For more information about these error codes, see Public error codes. |
| Number of errors | FunctionFunctionErrors | Count | The total number of failed function invocations caused by function errors. The sum is calculated every minute or every hour. |
| Errors due to throttling | FunctionConcurrencyThrottles | Count | The total number of failed invocations of a function caused by excessive concurrent instances and for which the HTTP status code 429 is returned. The sum is calculated every minute or every hour. |
| Errors due to throttling | FunctionResourceThrottles | Count | The total number of failed invocations of a function caused by excessive instances and for which the HTTP status code 429 is returned. The sum is calculated every minute or every hour. |
| Function execution time | FunctionAvgDuration | Milliseconds | The time from the start to the end of function executions, excluding the time consumed by the platform. The average value is calculated every minute or every hour. |
| Function execution time | FunctionP90Duration | Milliseconds | The time from the start to the end of function executions, excluding the time consumed by the platform. P90 values are calculated at specific granularities. A P90 value is the threshold below which the execution durations of 90% of invocations fall. For an illustration, see the example after this table. |
| Function execution time | FunctionP99Duration | Milliseconds | The time from the start to the end of function executions, excluding the time consumed by the platform. P99 values are calculated at specific granularities. A P99 value is the threshold below which the execution durations of 99% of invocations fall. |
| Function execution time | FunctionMaxDuration | Milliseconds | The time from the start to the end of function executions, excluding the time consumed by the platform. The maximum value is calculated every minute or every hour. |
| End-to-end latency | FunctionLatencyAvg | Milliseconds | The average amount of time consumed by function invocations. The duration starts when a function execution request arrives at Function Compute and ends when the request leaves Function Compute, including the time consumed by the platform. The average value is calculated every minute or every hour. |
| Memory usage | FunctionMemoryLimitMB | MB | The maximum amount of memory that can be used by a function when the function is invoked. If the function consumes more memory than this upper limit, an out-of-memory (OOM) error occurs. The maximum value for all instances of the function is calculated every minute or every hour. |
| Memory usage | FunctionMaxMemoryUsage | MB | The amount of memory that is actually consumed during function executions. The maximum value for all instances of the function is calculated every minute or every hour. |
| Number of on-demand instances | FunctionOndemandInstanceQuota | Count | The maximum number of on-demand instances for a function. The value is not displayed if you do not configure the maximum number of on-demand instances. |
| Number of on-demand instances | FunctionOndemandActiveInstance | Count | The number of on-demand instances that are actually occupied in function invocations. |
| Number of provisioned instances | FunctionProvisionedCurrentInstance | Count | The number of provisioned instances that are occupied in function executions. |
| Asynchronous invocations | FunctionEnqueueCount | Count | The number of enqueued requests when a function is asynchronously invoked. The sum is calculated every minute or every hour. |
| Asynchronous invocations | FunctionDequeueCount | Count | The number of processed requests when a function is asynchronously invoked. The sum is calculated every minute or every hour. Note: If the number of processed asynchronous requests is far less than the number of enqueued asynchronous requests, a message backlog occurs. In this case, modify the provisioned instance settings or contact us. For more information, see Configure provisioned instances and auto scaling rules. |
| Latency of asynchronous messages | FunctionAsyncMessageLatencyAvg | Milliseconds | The interval between when asynchronous requests are enqueued and when they are processed. The average value is calculated every minute or every hour. |
| Latency of asynchronous messages | FunctionAsyncMessageLatencyMax | Milliseconds | The interval between when asynchronous requests are enqueued and when they are processed. The maximum value is calculated every minute or every hour. |
| Events triggered during asynchronous invocations | FunctionAsyncEventExpiredDropped | Count | The total number of expired requests that are dropped when a destination is configured for asynchronous invocations of a function. The sum is calculated every minute or every hour. |
| Events triggered during asynchronous invocations | FunctionDestinationErrors | Count | The number of requests that fail to trigger the configured destination services during function executions. The sum is calculated every minute or every hour. |
| Events triggered during asynchronous invocations | FunctionDestinationSucceed | Count | The number of requests that successfully trigger the configured destination services during function executions. The sum is calculated every minute or every hour. |
| Asynchronous request backlogs | FunctionAsyncMessagesBacklog | Count | The total number of pending requests in the queue when the function is asynchronously invoked. The sum is calculated every minute or every hour. Note: If the number of backlogged asynchronous requests is greater than 0, modify the provisioned instance settings or contact us. For more information, see Configure provisioned instances and auto scaling rules. |
| Asynchronous request backlogs | FunctionAsyncMessagesInProcess | Count | The approximate number of asynchronous requests that are currently being processed. |
| Number of concurrent requests (instance-level metrics) | FunctionMaxConcurrentRequests | Count | The maximum number of requests that are concurrently executed on function instances. The maximum value is calculated every minute or every hour. |
| Number of concurrent requests (instance-level metrics) | FunctionAvgConcurrentRequests | Count | The average number of requests that are concurrently executed on function instances. The average value is calculated every minute or every hour. |
| vCPU usage (instance-level metrics) | FunctionvCPUQuotaCores | vCPU | The vCPU quota of a function. |
| vCPU usage (instance-level metrics) | FunctionMaxvCPUCores | vCPU | The actual maximum number of vCPUs used by a function. A value of 1 indicates one vCPU. The maximum value is collected every minute or every hour. |
| vCPU usage (instance-level metrics) | FunctionAvgvCPUCores | vCPU | The actual average number of vCPUs used by a function. A value of 1 indicates one vCPU. The average value is collected every minute or every hour. |
| vCPU utilization (instance-level metrics) | FunctionMaxvCPUUtilization | % | The ratio of actually used vCPUs to the vCPU quota. The maximum value is collected every minute or every hour. |
| vCPU utilization (instance-level metrics) | FunctionAvgvCPUUtilization | % | The ratio of actually used vCPUs to the vCPU quota. The average value is collected every minute or every hour. |
| Network traffic (instance-level metrics) | FunctionRXBytesPerSec | Mbit/s | The inbound traffic of a function per unit of time. |
| Network traffic (instance-level metrics) | FunctionTXBytesPerSec | Mbit/s | The outbound traffic of a function per unit of time. |
| Memory usage (instance-level metrics) | FunctionMemoryLimitMB | MB | The maximum amount of memory that can be used by a function. Note: If a function actually consumes more memory than the quota, an OOM error is reported. |
| Memory usage (instance-level metrics) | FunctionMaxMemoryUsageMB | MB | The maximum amount of memory that is actually used by function instances. The maximum value is calculated every minute or every hour. |
| Memory usage (instance-level metrics) | FunctionAvgMemoryUsageMB | MB | The average amount of memory that is actually consumed by function instances. The average value is calculated every minute or every hour. |
| Memory utilization (instance-level metrics) | FunctionMaxMemoryUtilization | % | The ratio of the memory that is actually consumed by function instances to the memory quota. The maximum value is collected every minute or every hour. |
| Memory utilization (instance-level metrics) | FunctionAvgMemoryUtilization | % | The ratio of the memory that is actually consumed by function instances to the memory quota. The average value is collected every minute or every hour. |
| GPU memory usage (instance-level metrics) | FunctionGPUMemoryLimitMB | MB | The GPU memory quota. |
| GPU memory usage (instance-level metrics) | FunctionGPUMaxMemoryUsage | MB | The amount of used GPU memory. |
| GPU memory utilization (instance-level metrics) | FunctionGPUMemoryUsagePercent | % | The GPU memory utilization. |
| GPU streaming multiprocessor (SM) utilization (instance-level metrics) | FunctionGPUSMPercent | % | The SM utilization. |
| GPU hardware encoder utilization (instance-level metrics) | FunctionGPUEncoderPercent | % | The hardware encoder utilization. |
| GPU hardware decoder utilization (instance-level metrics) | FunctionGPUDecoderPercent | % | The hardware decoder utilization. |
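The following snippet is a minimal sketch of how percentile metrics such as FunctionP90Duration and FunctionP99Duration can be interpreted. The sample durations are hypothetical, and the nearest-rank calculation is an illustration rather than the exact aggregation that Function Compute uses.

```python
def percentile(durations_ms, p):
    """Return the p-th percentile of durations by using the nearest-rank method."""
    ordered = sorted(durations_ms)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical execution durations (in milliseconds) within one statistical period.
samples = [12, 15, 18, 20, 22, 25, 30, 45, 80, 400]

print(percentile(samples, 90))  # 80: 90% of invocations finish within 80 ms
print(percentile(samples, 99))  # 400: 99% of invocations finish within 400 ms
```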
More information
For more information about how to call the CloudMonitor API to view monitoring details, see Monitoring data.
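The following example is a minimal sketch of how such a query might look with the Alibaba Cloud Python SDK for CloudMonitor. The acs_fc namespace, the dimension keys, and the metric name are assumptions based on the metrics listed in this topic; check the values shown in the CloudMonitor console for your account before you use them.

```python
# pip install aliyun-python-sdk-core aliyun-python-sdk-cms
from aliyunsdkcore.client import AcsClient
from aliyunsdkcms.request.v20190101.DescribeMetricListRequest import DescribeMetricListRequest

client = AcsClient("<your-access-key-id>", "<your-access-key-secret>", "cn-hangzhou")

request = DescribeMetricListRequest()
request.set_Namespace("acs_fc")                     # assumed Function Compute namespace
request.set_MetricName("FunctionTotalInvocations")  # one of the function-level metrics above
request.set_Period("60")                            # 1-minute granularity
# Assumed dimension keys; replace the values with your own service and function names.
request.set_Dimensions('[{"serviceName": "my-service", "functionName": "my-function"}]')

response = client.do_action_with_exception(request)
print(response)
```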