Function Compute supports the following billing methods: trial quotas, pay-as-you-go, and resource plans. CU usage is used as a unified billable item. This topic describes the unit prices of CU usage and the conversion factors that are used to convert the number of function invocations, active vCPU usage, idle vCPU usage, memory usage, disk usage, active GPU usage, and idle GPU usage to CU usage.
You can log on to the Function Compute console and view the following information in the Global Statistics section of the Overview page: the number of function invocations, active vCPU usage, idle vCPU usage, memory usage, disk usage, active GPU usage (including the Tesla series and Ada series), and idle GPU usage (including the Tesla series and Ada series). You can use the price calculator to convert the resource usage into CU usage and calculate the total fee. The resource usage of all RAM users is aggregated under the Alibaba Cloud account, and the statistics are collected and billed at the Alibaba Cloud account level.
Starting from 00:00 on January 5, 2024, outbound Internet traffic of Function Compute is billed through Cloud Data Transfer (CDT). You are charged for Internet traffic based on the billing rules of CDT. For more information, see Supported services and [Product changes] Change of free Internet traffic quota.
Starting from August 27, 2024, the original billable items of Function Compute, including the number of function invocations, active vCPU usage, idle vCPU usage, memory usage, disk usage, active GPU usage, and idle GPU usage, are no longer billed as separate items. The usage of these resources is converted into CU usage based on conversion factors, and you are charged based on the unit prices of CU usage. The conversion factor varies by resource type. For more information, see Conversion factors.
If resources of other Alibaba Cloud services are consumed when you use Function Compute, pay attention to the billing of the related services.
Billing methods
Trial quotas
Function Compute provides a free trial CU plan for users who activate Function Compute for the first time. If you do not purchase other types of resource plans, the resource usage that exceeds the trial quota in each cycle is billed on a pay-as-you-go basis. For more information, see Trial quotas.
Resource plans
Function Compute provides CU resource plans in five sizes. After you purchase a resource plan, the plan is used first to offset your resource usage. After the quota in the plan is exhausted, you are charged on a pay-as-you-go basis. Resource plans allow you to use the same amount of resources at more favorable prices, which helps you reduce costs. For more information, see Resource plans.
Pay-as-you-go
You are charged based on computing resources that you actually consume. For more information, see Pay-as-you-go.
Prices
CU usage is billed monthly on a tiered basis. The following table describes the details.
Tier | CU usage (CU) | Unit price | Discounted unit price (August 27, 2024 to August 27, 2025) |
1 | (0, 100 million] | USD 0.000020/CU | USD 0.0000160/CU |
2 | (100 million, 500 million] | USD 0.000017/CU | USD 0.0000136/CU |
3 | > 500 million | USD 0.000014/CU | USD 0.0000112/CU |
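The tiers apply progressively: the portion of monthly CU usage that falls into each tier is billed at that tier's unit price. The following Python snippet is a minimal sketch of this calculation based on the standard (non-discounted) unit prices in the table. The function and constant names are illustrative and are not part of any Function Compute API.

```python
# Illustrative sketch of the tiered CU pricing described above.
# Tier boundaries and unit prices are taken from the table; the
# names below are hypothetical, not an official API.
TIERS = [
    (100_000_000, 0.000020),   # tier 1: (0, 100 million] CUs
    (500_000_000, 0.000017),   # tier 2: (100 million, 500 million] CUs
    (float("inf"), 0.000014),  # tier 3: > 500 million CUs
]

def monthly_fee(cu_usage: float) -> float:
    """Return the monthly fee in USD for the given CU usage."""
    fee, lower = 0.0, 0.0
    for upper, unit_price in TIERS:
        if cu_usage <= lower:
            break
        billed = min(cu_usage, upper) - lower  # portion that falls into this tier
        fee += billed * unit_price
        lower = upper
    return fee

print(round(monthly_fee(1_600_000_000), 2))  # 24200.0, matching the billing example below
```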
Conversion factors
The original billable items of Function Compute, including the number of function invocations, active vCPU usage, idle vCPU usage, memory usage, disk usage, active GPU usage, and idle GPU usage, are converted to CU usage based on the following formula: Resource usage × Conversion factor = CU usage.
The following table lists the conversion factors.
Billable item | Unit | CU conversion factor |
Number of function invocations | CU/10,000 invocations | 75 |
Active vCPU usage | CU/vCPU-second | 1 |
Idle vCPU usage | CU/vCPU-second | 0 |
Memory usage | CU/GB-second | 0.15 |
Disk usage | CU/GB-second | 0.05 |
Tesla series active GPU usage | CU/GB-second | 2.1 |
Tesla series idle GPU usage | CU/GB-second | 0.5 |
Ada series active GPU usage | CU/GB-second | 1.5 |
Ada series idle GPU usage | CU/GB-second | 0.25 |
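As a minimal sketch of the formula Resource usage × Conversion factor = CU usage, the following snippet converts a set of resource usage figures into CU usage with the factors listed above. The dictionary keys and function name are made up for illustration; invocations are expressed here as a per-invocation factor (75 CU per 10,000 invocations = 0.0075 CU per invocation).

```python
# Conversion factors from the table above; key names are illustrative only.
CU_FACTORS = {
    "invocations": 0.0075,      # CU/invocation
    "active_vcpu": 1.0,         # CU/vCPU-second
    "idle_vcpu": 0.0,           # CU/vCPU-second
    "memory": 0.15,             # CU/GB-second
    "disk": 0.05,               # CU/GB-second
    "tesla_active_gpu": 2.1,    # CU/GB-second
    "tesla_idle_gpu": 0.5,      # CU/GB-second
    "ada_active_gpu": 1.5,      # CU/GB-second
    "ada_idle_gpu": 0.25,       # CU/GB-second
}

def to_cu(usage: dict[str, float]) -> float:
    """Resource usage x conversion factor, summed over all billable items."""
    return sum(CU_FACTORS[item] * amount for item, amount in usage.items())

# Example: 1 million invocations plus 12,600 vCPU-seconds of active vCPU usage.
print(to_cu({"invocations": 1_000_000, "active_vcpu": 12_600}))  # 20100.0 CUs
```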
Terms
Idle mode: Function Compute supports the idle mode feature. After the idle mode feature is enabled, elastic instances and GPU-accelerated instances in Function Compute are classified into active and idle instances based on whether they are processing requests.
Active instance: an instance that is processing requests.
Idle instance: an instance that is not processing requests after the idle mode feature is enabled.
Execution duration: Instances in Function Compute can be used in provisioned mode or on-demand mode. The execution duration is measured differently in the two modes. For more information, see Instance types and usage modes.
On-demand mode: Function Compute automatically allocates and releases function instances. The billing of an on-demand function instance starts when the instance starts to execute requests and ends when the requests are completed.
Provisioned mode: Function instances are allocated, released, and managed by yourself. The billing of a provisioned instance starts when Function Compute allocates the instance and ends when you release the instance.
In provisioned mode, you are charged for instances until you release them, even if the instances do not process any requests. If your provisioned instances do not process any requests but continue to incur fees, release the instances at the earliest opportunity. For more information, see Configure auto scaling rules.
Billing examples
Assume that you have consumed the following resources in a month: 800 million vCPU-seconds of active vCPU usage, 2 billion GB-seconds of memory usage, 0 GB-seconds of disk usage, 100 million GB-seconds of Tesla series active GPU usage, 400 million GB-seconds of Tesla series idle GPU usage, and 12 billion function invocations. The following table shows the CU usage and the total fee.
Resource usage type | Total usage | Conversion factor | Converted CU usage |
Active vCPU usage | 800,000,000 vCPU-seconds | 1 CU/vCPU-second | 800,000,000 CUs |
Memory usage | 2,000,000,000 GB-seconds | 0.15 CU/GB-second | 300,000,000 CUs |
Disk usage | 0 GB-seconds | 0.05 CU/GB-second Note: The disk size of 512 MB is free. You are charged for disk capacity exceeding 512 MB. | 0 CU |
Tesla series active GPU usage | 100,000,000 GB-seconds | 2.1 CU/GB-second | 210,000,000 CUs |
Tesla series idle GPU usage | 400,000,000 GB-seconds | 0.5 CU/GB-second | 200,000,000 CUs |
Number of function invocations | 12,000,000,000 invocations | 0.0075 CU/invocation | 90,000,000 CUs |
Total CU usage: 1,600,000,000 CUs
Fee = Tier 1 unit price × Tier 1 usage + Tier 2 unit price × Tier 2 usage + Tier 3 unit price × Tier 3 usage = USD 0.000020/CU × 100,000,000 CUs + USD 0.000017/CU × 400,000,000 CUs + USD 0.000014/CU × 1,100,000,000 CUs = USD 24,200
The vCPU usage, memory usage, disk usage, and GPU usage are calculated based on the specifications that you configure for your function and the duration of usage, not based on the amount of resources actually consumed during function invocations.
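To check the arithmetic in this example, the following sketch reproduces the conversion and the tiered fee with the factors and unit prices from the preceding tables. The variable names are illustrative only.

```python
# Reproduces the pay-as-you-go billing example above; names are illustrative only.
usage_cu = (
    800_000_000 * 1.0           # active vCPU usage
    + 2_000_000_000 * 0.15      # memory usage
    + 0 * 0.05                  # disk usage (512 MB of disk is free)
    + 100_000_000 * 2.1         # Tesla series active GPU usage
    + 400_000_000 * 0.5         # Tesla series idle GPU usage
    + 12_000_000_000 * 0.0075   # function invocations
)                               # = 1,600,000,000 CUs

fee = (
    100_000_000 * 0.000020                 # tier 1
    + 400_000_000 * 0.000017               # tier 2
    + (usage_cu - 500_000_000) * 0.000014  # tier 3
)                                          # = USD 24,200
print(usage_cu, round(fee, 2))
```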
Billing example of provisioned instances
Elastic instances
This section provides a billing example of provisioned elastic instances. In this example, you have created a function that has the following configurations: 0.35 vCPUs, 512 MB of memory, and 512 MB of disk space. Instances of the function are provisioned for 50 hours, during which the instances are in the active state for 10 hours and in the idle state for 40 hours. A total of 1 million invocations are initiated. The following table lists the CU usage and the total billable amount.
In provisioned mode of elastic instances, memory usage and disk usage are billed based on the total execution duration. The active vCPU usage is billed based on the active execution duration.
Resource usage type | Usage | Conversion factor | Converted CU usage |
Active vCPU usage | 12,600 vCPU-seconds | 1 CU/vCPU-second | 12,600 CUs |
Idle vCPU usage | 50,400 vCPU-seconds | 0 CU/vCPU-second Note: No fees are incurred for idle vCPUs. | 0 CU |
Memory usage | 90,000 GB-seconds | 0.15 CU/GB-second | 13,500 CUs |
Disk usage | 0 GB-seconds | 0.05 CU/GB-second Note: The disk size of 512 MB is free. You are charged for disk capacity exceeding 512 MB. | 0 CU |
Number of function invocations | 1,000,000 invocations | 0.0075 CU/invocation | 7,500 CUs |
Total CU usage: 33,600 CUs
Fee = Tier 1 unit price × Tier 1 usage = USD 0.000020/CU × 33,600 CUs = USD 0.67
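The following sketch shows how the usage figures in this example can be derived from the configured specifications and the active and idle durations; the 512 MB disk falls within the free quota, so disk usage contributes 0 CU. All names are illustrative.

```python
# Provisioned elastic instance example; all names are illustrative.
vcpu, memory_gb = 0.35, 0.5   # function specifications (512 MB = 0.5 GB)
active_s = 10 * 3600          # 10 hours in the active state
total_s = 50 * 3600           # 50 provisioned hours in total
invocations = 1_000_000

active_vcpu_cu = vcpu * active_s * 1.0           # 12,600 CUs
idle_vcpu_cu = vcpu * (total_s - active_s) * 0   # idle vCPUs are free
memory_cu = memory_gb * total_s * 0.15           # 13,500 CUs
invocation_cu = invocations * 0.0075             # 7,500 CUs

total_cu = active_vcpu_cu + idle_vcpu_cu + memory_cu + invocation_cu
fee = total_cu * 0.000020     # within tier 1: USD 0.672, about USD 0.67
print(total_cu, fee)          # 33600.0 0.672
```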
GPU-accelerated instances
This section provides a billing example of provisioned GPU-accelerated instances. In this example, you have created a GPU function that has the following configurations: a Tesla series GPU with 16 GB of GPU memory, 8 vCPUs, 32 GB of memory, and 512 MB of disk space. Instances of the function are provisioned for 50 hours, during which the instances are in the active state for 10 hours and in the idle state for 40 hours. A total of 1 million invocations are initiated. The following table lists the CU usage and the total billable amount.
In provisioned mode of GPU-accelerated instances, memory usage and disk usage are billed based on the total execution duration. The active vCPU and GPU usage is billed based on the active execution duration. vCPUs and GPUs of GPU-accelerated instances are frozen when no requests are made to the instances.
Resource usage type | Usage | Conversion factor | Converted CU usage |
Active vCPU usage | 288,000 vCPU-seconds | 1 CU/vCPU-second | 288,000 CUs |
Idle vCPU usage | 1,152,000 vCPU-seconds | 0 CU/vCPU-second Note: No fees are incurred for idle vCPUs. | 0 CU |
Memory usage | 5,760,000 GB-seconds | 0.15 CU/GB-second | 864,000 CUs |
Disk usage | 0 GB-seconds | 0.05 CU/GB-second Note: The disk size of 512 MB is free. You are charged for disk capacity exceeding 512 MB. | 0 CU |
Tesla series active GPU usage | 576,000 GB-seconds | 2.1 CU/GB-second | 1,209,600 CUs |
Tesla series idle GPU usage | 2,304,000 GB-seconds | 0.5 CU/GB-second | 1,152,000 CUs |
Number of function invocations | 1,000,000 invocations | 0.0075 CU/invocation | 7,500 CUs |
Total CU usage: 3,521,100 CUs
Fee = Tier 1 unit price × Tier 1 usage = USD 0.000020/CU × 3,521,100 CUs = USD 70.42
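A similar sketch for this GPU-accelerated example, where GPU usage is measured in GB-seconds of configured GPU memory; all names are illustrative.

```python
# Provisioned GPU-accelerated instance example; all names are illustrative.
gpu_gb, vcpu, memory_gb = 16, 8, 32       # function specifications
active_s, total_s = 10 * 3600, 50 * 3600  # 10 active hours out of 50 provisioned hours
idle_s = total_s - active_s
invocations = 1_000_000

total_cu = (
    vcpu * active_s * 1.0           # active vCPU usage: 288,000 CUs (idle vCPUs are free)
    + memory_gb * total_s * 0.15    # memory usage: 864,000 CUs
    + gpu_gb * active_s * 2.1       # Tesla series active GPU usage: 1,209,600 CUs
    + gpu_gb * idle_s * 0.5         # Tesla series idle GPU usage: 1,152,000 CUs
    + invocations * 0.0075          # invocations: 7,500 CUs
)                                   # = 3,521,100 CUs
fee = total_cu * 0.000020           # within tier 1: about USD 70.42
print(total_cu, round(fee, 2))
```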