In the Function Compute console, you can query overview metrics for Function Compute resources and view detailed monitoring metrics at the region, function, and instance dimensions. You can also specify metric names to monitor specific metrics. This topic describes the monitoring metrics of Function Compute.
Resource overview metrics
You can log on to the Function Compute console to view resource overview metrics in the Resource Usage Statistics section on the Overview page.
Resource overview metrics are used to monitor and measure the overall resource usage and network traffic of Function Compute across all regions or in a specific region. Metric values are summed on a daily or monthly basis, and the usage metrics are calculated by multiplying resource sizes by execution durations. The following table describes the resource metrics; a worked example of the calculation follows the table.
| Category | Metric | Unit | Description |
| --- | --- | --- | --- |
| Overview metrics | Invocations | Count | The total number of function invocations. |
| Overview metrics | vCPU Usage | vCPU-second | The vCPU resources consumed during function invocations. The value is calculated by multiplying vCPU sizes by function execution durations. |
| Overview metrics | Memory Usage | GB-second | The amount of memory consumed during function invocations. The value is calculated by multiplying memory sizes by function execution durations. |
| Overview metrics | Disk Usage | GB-second | The disk resources consumed during function invocations. The value is calculated by multiplying disk sizes by function execution durations. |
| Overview metrics | Outbound Internet Traffic | GB | The total outbound Internet traffic that is generated during function executions within a specified statistical period. |
| Overview metrics | GPU Usage | GB-second | The GPU resources consumed during function invocations. The value is calculated by multiplying GPU sizes by function execution durations. |
| vCPU usage | Active vCPU Usage | vCPU-second | The vCPU resources consumed by active instances during function invocations. The value is calculated by multiplying vCPU sizes by function execution durations. |
| vCPU usage | Idle vCPU Usage | vCPU-second | The vCPU resources consumed by idle instances during function invocations. The value is calculated by multiplying vCPU sizes by idle function durations. |
| GPU usage | Active GPU Usage | GB-second | The GPU resources consumed by active instances during function invocations. The value is calculated by multiplying GPU sizes by function execution durations. |
| GPU usage | Idle GPU Usage | GB-second | The GPU resources consumed by idle instances during function invocations. The value is calculated by multiplying GPU sizes by idle function durations. |
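The usage values in the preceding table all follow the same pattern: resource size multiplied by execution duration, summed over invocations. The following sketch uses hypothetical numbers, not values from any real account, to show the arithmetic:

```python
# Illustrative arithmetic only, with hypothetical values: how the vCPU-second
# and GB-second usage figures in the preceding table are derived.
invocations = 100_000    # invocations in the statistical period
duration_s = 0.2         # average execution duration per invocation, in seconds
vcpu_size = 0.35         # vCPU size configured for the function
memory_gb = 0.5          # memory size configured for the function, in GB

vcpu_seconds = vcpu_size * duration_s * invocations       # 7,000 vCPU-seconds
memory_gb_seconds = memory_gb * duration_s * invocations  # 10,000 GB-seconds
print(vcpu_seconds, memory_gb_seconds)
```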
Region-level metrics
Log on to the Function Compute console. In the left-side navigation pane, navigate to the monitoring page to view metrics at the region dimension.
Region-level metrics are used to monitor the resource usage of Function Compute in a region. The following table describes these metrics; a short example that derives ratios from the error and concurrency counters follows the table.
| Category | Metric | Unit | Description |
| --- | --- | --- | --- |
| Function executions | RegionTotalInvocations | Count | The total number of function invocations in a region. The sum is calculated every minute or every hour. |
| Number of errors | RegionServerErrors | Count | The total number of failed function invocations in a region caused by Function Compute server errors. The statistics are collected every minute or every hour. Note: HTTP trigger invocations for which a 5xx status code is returned are also counted. |
| Number of errors | RegionClientErrors | Count | The total number of requests that are not executed or fail to be executed due to client errors of Function Compute and for which an HTTP 4xx status code is returned. The sum is calculated every minute or every hour. For more information, see Public error codes. |
| Number of errors | RegionFunctionErrors | Count | The total number of failed invocations in a region caused by function errors. The sum is calculated every minute or every hour. |
| Errors due to throttling | RegionThrottles | Count | The total number of failed invocations in a region caused by excessive concurrent instances and for which the HTTP 429 status code is returned. The sum is calculated every minute or every hour. |
| Errors due to throttling | RegionResourceThrottles | Count | The total number of failed invocations in a region caused by excessive instances and for which the HTTP 429 status code is returned. The sum is calculated every minute or every hour. |
| Number of on-demand instances | RegionConcurrencyLimit | Count | The maximum number of on-demand instances in a region within the current account. Default value: 300. |
| Number of on-demand instances | RegionConcurrentCount | Count | The number of on-demand instances that are concurrently occupied when functions in a region are invoked. The sum is calculated every minute or every hour. |
| Number of provisioned instances | RegionProvisionedCurrentInstance | Count | The total number of provisioned instances that are created for all functions in a region within the current account. |
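As an illustration of how the counters above can be combined, the following sketch uses a hypothetical helper and example numbers, not part of any SDK, to derive a server error rate and an on-demand concurrency utilization ratio for a region:

```python
# Hypothetical helper (not part of any SDK) that turns the region-level
# counters above into ratios that are easier to alert on.
def region_health(total_invocations: int, server_errors: int,
                  concurrent_count: int, concurrency_limit: int) -> tuple[float, float]:
    """Return (server error rate, on-demand concurrency utilization)."""
    error_rate = server_errors / total_invocations if total_invocations else 0.0
    concurrency_utilization = concurrent_count / concurrency_limit
    return error_rate, concurrency_utilization

# Example values: 50 server errors out of 200,000 invocations, and 240 of the
# default 300 on-demand instances in use.
print(region_health(200_000, 50, 240, 300))  # (0.00025, 0.8)
```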
Function-level metrics
Log on to the Function Compute console. In the left-side navigation pane, go to the function list. In the Function Name column, click the name of the function that you want to view.
Function-level metrics are used to monitor and measure the usage of specific function resources from the perspective of functions and from the perspective of function versions and aliases. Both perspectives belong to the function dimension. Metric names from the version and alias perspective carry the FunctionQualifier prefix; for example, FunctionQualifierTotalInvocations indicates the total number of invocations of a specific version or alias. Function Compute can monitor and measure the CPU utilization, memory usage, and network traffic of a function only after instance-level metrics are enabled. For more information about instance-level metrics, see Instance-level metrics. A brief illustration of the naming convention appears below, followed by a table that describes the function-level metrics.
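The following sketch is illustrative only: a hypothetical helper, not a Function Compute or CloudMonitor API, that maps a function-perspective metric name to its version or alias (FunctionQualifier) counterpart.

```python
# Illustrative only: a hypothetical helper (not a Function Compute API) that
# maps a metric name from the function perspective to its version/alias
# (FunctionQualifier) counterpart.
def qualifier_metric_name(function_metric: str) -> str:
    return function_metric.replace("Function", "FunctionQualifier", 1)

print(qualifier_metric_name("FunctionTotalInvocations"))
# FunctionQualifierTotalInvocations
```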
| Category | Metric | Unit | Description |
| --- | --- | --- | --- |
| Invocations | FunctionTotalInvocations | Count | The total number of function invocations in provisioned and on-demand modes. The sum is calculated every minute or every hour. |
| Invocations | FunctionProvisionInvocations | Count | The total number of function invocations in provisioned mode. The sum is calculated every minute or every hour. |
| HTTP status codes | FunctionHTTPStatus2xx | Count | The number of invocations with 2xx HTTP status codes returned per second. The value is calculated at a granularity of 1 minute, 5 minutes, or 1 hour. |
| HTTP status codes | FunctionHTTPStatus3xx | Count | The number of invocations with 3xx HTTP status codes returned per second. The value is calculated at a granularity of 1 minute, 5 minutes, or 1 hour. |
| HTTP status codes | FunctionHTTPStatus4xx | Count | The number of invocations with 4xx HTTP status codes returned per second. The value is calculated at a granularity of 1 minute, 5 minutes, or 1 hour. |
| HTTP status codes | FunctionHTTPStatus5xx | Count | The number of invocations with 5xx HTTP status codes returned per second. The value is calculated at a granularity of 1 minute, 5 minutes, or 1 hour. |
| Number of errors | FunctionServerErrors | Count | The total number of failed invocations of a function caused by Function Compute server errors. The statistics are collected every minute or every hour. Note: HTTP trigger invocations for which a 5xx status code is returned are also counted. |
| Number of errors | FunctionClientErrors | Count | The total number of requests that are not executed or fail to be executed due to client errors of Function Compute and for which an HTTP 4xx status code is returned. The sum is calculated every minute or every hour. For more information, see Public error codes. |
| Number of errors | FunctionFunctionErrors | Count | The total number of failed function invocations caused by function errors. The sum is calculated every minute or every hour. |
| Errors due to throttling | FunctionConcurrencyThrottles | Count | The total number of failed invocations of a function caused by excessive concurrent instances and for which the HTTP 429 status code is returned. The sum is calculated every minute or every hour. |
| Errors due to throttling | FunctionResourceThrottles | Count | The total number of failed invocations of a function caused by excessive instances and for which the HTTP 429 status code is returned. The sum is calculated every minute or every hour. |
| Function execution time | FunctionAvgDuration | Milliseconds | The time from the start to the end of function executions. The time consumed by the platform is not included. The average value is calculated every minute or every hour. |
| Function execution time | FunctionP90Duration | Milliseconds | The time from the start to the end of function executions. The time consumed by the platform is not included. P90 values are calculated based on specific granularities. A P90 value is the threshold below which the execution durations of 90% of invocations fall. |
| Function execution time | FunctionP99Duration | Milliseconds | The time from the start to the end of function executions. The time consumed by the platform is not included. P99 values are calculated based on specific granularities. A P99 value is the threshold below which the execution durations of 99% of invocations fall. |
| Function execution time | FunctionMaxDuration | Milliseconds | The time from the start to the end of function executions. The time consumed by the platform is not included. The maximum value is calculated every minute or every hour. |
| End-to-end latency | FunctionLatencyAvg | Milliseconds | The average amount of time consumed by function invocations. The duration starts when a function execution request arrives at Function Compute and ends when the request leaves Function Compute, and includes the time consumed by the platform. The average value is calculated every minute or every hour. |
| Memory usage | FunctionMemoryLimitMB | MB | The maximum amount of memory that can be used by a function when the function is invoked. If the function consumes more memory than this upper limit, an out-of-memory (OOM) error occurs. The maximum value for all instances of the function is calculated every minute or every hour. |
| Memory usage | FunctionMaxMemoryUsage | MB | The amount of memory that is actually consumed in function executions. The maximum value for all instances of the function is calculated every minute or every hour. |
| Number of on-demand instances | FunctionOndemandInstanceQuota | Count | The maximum number of on-demand instances for a function. The value is not displayed if you do not configure a maximum number of on-demand instances. |
| Number of on-demand instances | FunctionOndemandActiveInstance | Count | The number of on-demand instances that are actually occupied in function invocations. |
| Number of provisioned instances | FunctionProvisionedCurrentInstance | Count | The number of provisioned instances that are occupied in function executions. |
| Asynchronous invocations | FunctionEnqueueCount | Count | The number of enqueued requests when a function is asynchronously invoked. The sum is calculated every minute or every hour. |
| Asynchronous invocations | FunctionDequeueCount | Count | The number of processed requests when a function is asynchronously invoked. The sum is calculated every minute or every hour. Note: If the number of processed asynchronous requests is far less than the number of enqueued asynchronous requests, a message backlog occurs. In this case, modify the provisioned instance settings or contact us. For more information, see Configure provisioned instances. |
| Latency of asynchronous messages | FunctionAsyncMessageLatencyAvg | Milliseconds | The average interval between when asynchronous requests are enqueued and when they are processed. The average value is calculated every minute or every hour. |
| Latency of asynchronous messages | FunctionAsyncMessageLatencyMax | Milliseconds | The maximum interval between when asynchronous requests are enqueued and when they are processed. The maximum value is calculated every minute or every hour. |
| Events triggered during asynchronous invocations | FunctionAsyncEventExpiredDropped | Count | The total number of requests that are dropped when a destination is configured for asynchronous invocations of a function. The sum is calculated every minute or every hour. |
| Events triggered during asynchronous invocations | FunctionDestinationErrors | Count | The number of requests that fail to trigger the configured destination services during function executions. The sum is calculated every minute or every hour. |
| Events triggered during asynchronous invocations | FunctionDestinationSucceed | Count | The number of requests that successfully trigger the configured destination services during function executions. The sum is calculated every minute or every hour. |
| Asynchronous request backlogs | FunctionAsyncMessagesBacklog | Count | The total number of pending enqueued requests when the function is asynchronously invoked. The statistics are collected every minute or every hour. Note: If the number of backlogged asynchronous requests is greater than 0, modify the provisioned instance settings or contact us. For more information, see Configure provisioned instances. |
| Asynchronous request backlogs | FunctionAsyncMessagesInProcess | Count | The approximate number of asynchronous requests that are currently being processed. |
| Number of concurrent requests (instance-level metrics) | FunctionMaxConcurrentRequests | Count | The maximum number of concurrently executed requests on function instances. The maximum value is calculated every minute or every hour. |
| Number of concurrent requests (instance-level metrics) | FunctionAvgConcurrentRequests | Count | The average number of concurrently executed requests on function instances. The average value is calculated every minute or every hour. |
| vCPU usage (instance-level metrics) | FunctionvCPUQuotaCores | vCPU | The vCPU quota of the function. |
| vCPU usage (instance-level metrics) | FunctionMaxvCPUCores | vCPU | The actual maximum number of vCPUs used by the function. A value of 1 indicates one vCPU. The maximum value is collected every minute or every hour. |
| vCPU usage (instance-level metrics) | FunctionAvgvCPUCores | vCPU | The actual average number of vCPUs used by the function. A value of 1 indicates one vCPU. The average value is collected every minute or every hour. |
| vCPU utilization (instance-level metrics) | FunctionMaxvCPUUtilization | % | The maximum ratio of actually used vCPUs to the vCPU quota. The maximum value is collected every minute or every hour. |
| vCPU utilization (instance-level metrics) | FunctionAvgvCPUUtilization | % | The average ratio of actually used vCPUs to the vCPU quota. The average value is collected every minute or every hour. |
| Network traffic (instance-level metrics) | FunctionRXBytesPerSec | Mbps | The inbound traffic of a function per unit of time. |
| Network traffic (instance-level metrics) | FunctionTXBytesPerSec | Mbps | The outbound traffic of a function per unit of time. |
| Memory usage (instance-level metrics) | FunctionMemoryLimitMB | MB | The maximum amount of memory that can be used by a function. Note: If a function actually consumes more memory than the quota, an OOM error is reported. |
| Memory usage (instance-level metrics) | FunctionMaxMemoryUsageMB | MB | The maximum amount of memory that is actually used by function instances. The maximum value is calculated every minute or every hour. |
| Memory usage (instance-level metrics) | FunctionAvgMemoryUsageMB | MB | The average amount of memory that is actually consumed by function instances. The average value is calculated every minute or every hour. |
| Memory utilization (instance-level metrics) | FunctionMaxMemoryUtilization | % | The maximum ratio of the amount of memory that is actually consumed by function instances to the memory quota. The maximum value is collected every minute or every hour. |
| Memory utilization (instance-level metrics) | FunctionAvgMemoryUtilization | % | The average ratio of the amount of memory that is actually consumed by function instances to the memory quota. The average value is collected every minute or every hour. |
| GPU memory usage (instance-level metrics) | FunctionGPUMemoryLimitMB | MB | The GPU memory quota. |
| GPU memory usage (instance-level metrics) | FunctionGPUMaxMemoryUsage | MB | The amount of used GPU memory. |
| GPU memory utilization (instance-level metrics) | FunctionGPUMemoryUsagePercent | % | The GPU memory utilization. |
| GPU streaming multiprocessor (SM) utilization (instance-level metrics) | FunctionGPUSMPercent | % | The SM utilization. |
| GPU hardware encoder utilization (instance-level metrics) | FunctionGPUEncoderPercent | % | The hardware encoder utilization. |
| GPU hardware decoder utilization (instance-level metrics) | FunctionGPUDecoderPercent | % | The hardware decoder utilization. |
More information
For more information about how to call the CloudMonitor API to view monitoring details, see Monitoring data.
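As a hedged illustration of such a query, the sketch below uses the classic Alibaba Cloud Python SDK (the aliyun-python-sdk-core and aliyun-python-sdk-cms packages) to call the DescribeMetricList operation for one of the metrics in this topic. The namespace, metric name, period, and dimension key are assumptions made for illustration only; confirm the exact values in the Monitoring data reference before use.

```python
# A minimal sketch, not an official sample: query a Function Compute metric
# through the CloudMonitor DescribeMetricList API. The namespace "acs_fc",
# the metric name, the period, and the dimension key below are assumptions
# for illustration; verify them against the Monitoring data reference.
from aliyunsdkcore.client import AcsClient
from aliyunsdkcms.request.v20190101.DescribeMetricListRequest import DescribeMetricListRequest

client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

request = DescribeMetricListRequest()
request.set_Namespace("acs_fc")                     # assumed Function Compute namespace
request.set_MetricName("FunctionTotalInvocations")  # a metric name from the table above
request.set_Period("60")                            # 1-minute statistical granularity
request.set_Dimensions('[{"functionName": "my-function"}]')  # hypothetical dimension key

response = client.do_action_with_exception(request)
print(response.decode("utf-8"))
```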