
Function Compute: Configure instance concurrency

Last Updated: Jan 23, 2024

This topic describes the background information, scenarios, benefits, and limits of instance concurrency. This topic also describes how to configure instance concurrency in the Function Compute console.

Background

Function Compute calculates fees based on the execution duration of instances. For example, assume that a function must wait 10 seconds for a database to respond. If three concurrent requests are processed by three separate instances, the total billed execution duration is 30 seconds. If all three requests are concurrently processed by one instance, the total billed execution duration is only 10 seconds. To reduce your costs, Function Compute allows you to use one instance to concurrently process multiple requests. You can use the Instance Concurrency parameter to specify the number of requests that a single instance can process at the same time. The following figure shows the differences between the scenario in which an instance processes one request at a time and the scenario in which a single instance processes multiple requests at the same time.

[Figure: comparison between Instance Concurrency = 1 and Instance Concurrency = 10]

In the preceding figure, three requests need to be processed at the same time.

  • If Instance Concurrency is set to 1, each instance processes only one request at a time. Function Compute needs to create three instances to process the requests.

  • If Instance Concurrency is set to 10, each instance can process 10 requests at the same time. Function Compute needs to create only one instance to process all three requests.

Note

By default, Instance Concurrency is set to 1. That is, a single instance can process only one request at a time. If you set Instance Concurrency to a value greater than 1, Function Compute creates new instances only when the number of concurrent requests exceeds the number of requests that the existing instances can process at the same time.

Scenarios

If a function spends a long time waiting for responses from downstream services, we recommend that you use a single instance to concurrently process multiple requests. In most cases, requests consume few resources while they wait for responses, so processing them concurrently on one instance reduces costs.
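
The following Node.js sketch illustrates this scenario. The downstream endpoint is a placeholder, not part of Function Compute: the handler spends most of its time waiting for the response, so an instance with Instance Concurrency greater than 1 can overlap that waiting time across several requests.

// Minimal sketch of an I/O-bound handler that benefits from instance concurrency.
// The downstream URL is a placeholder. While the instance waits for the response,
// it can start processing other concurrent requests instead of sitting idle.
const https = require('https');

exports.handler = (event, context, callback) => {
    https.get('https://downstream.example.com/slow-query', (res) => {
        let data = '';
        res.on('data', (chunk) => (data += chunk));
        res.on('end', () => callback(null, data));
    }).on('error', (err) => callback(err));
};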

Benefits

  • Reduces the execution duration and costs.

    For example, for functions that perform I/O operations, you can use a single instance to concurrently process multiple requests. This reduces the number of instances that are required and therefore the total billed execution duration of the requests.

  • Shares the status among requests.

    Multiple requests that are processed by the same instance can share its database connection pool, which reduces the number of connections that are established to the database (see the sketch at the end of this section).

  • Reduces the frequency of cold starts.

    One instance can process multiple requests, which reduces the number of new instances and lowers the frequency of cold starts.

  • Reduces the number of IP addresses used in a virtual private cloud (VPC).

    For a fixed number of requests to be processed, the number of required instances is reduced if each instance can process multiple requests. This reduces the number of IP addresses used in the VPC.

    Important

    Make sure that the vSwitch associated with your VPC has at least two available IP addresses. Otherwise, services may be unavailable, which causes request errors.
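
As mentioned in the status-sharing benefit above, requests that are processed by the same instance can share one database connection pool. The following Node.js sketch shows this pattern. It assumes the third-party mysql2 package and placeholder connection settings; any client that supports pooling can be used in the same way.

// Sketch of sharing state across concurrent requests on one instance.
// The mysql2 package and the connection settings below are assumptions for illustration.
const mysql = require('mysql2');

// Created once per instance and reused by every request that the instance
// processes concurrently, so each request does not open its own connection.
const pool = mysql.createPool({
    host: 'db.example.internal',        // placeholder host
    user: 'app',                        // placeholder user
    password: process.env.DB_PASSWORD,  // placeholder credential
    database: 'demo',
    connectionLimit: 10,
});

exports.handler = (event, context, callback) => {
    pool.query('SELECT 1 AS ok', (err, rows) => {
        if (err) {
            callback(err);
            return;
        }
        callback(null, rows);
    });
};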

Impacts

This section describes the differences between the scenario in which an instance processes one request at a time (Instance Concurrency = 1) and the scenario in which an instance processes multiple requests at a time (Instance Concurrency > 1):

Billing

  • A single instance processes one request at a time

    An instance processes only one request at a time. You are billed for the execution duration of each request, from the time when the request starts to be processed to the time when the request is completely processed.

    [Figure: billing duration when a single instance processes one request at a time]
  • A single instance processes multiple requests at the same time

    For a single instance that concurrently processes multiple requests, you are charged based on the execution duration of the requests that the instance processes. The billing duration starts when the instance starts to process the first request and ends when the last request is completely processed.

    [Figure: billing duration when a single instance processes multiple requests at the same time]

For more information, see Billing overview.

Concurrency throttling

By default, Function Compute supports up to 300 on-demand instances in a region. The maximum number of requests that can be concurrently processed in a region is calculated by using the following formula: 300 × Instance Concurrency. For example, if you set Instance Concurrency to 10, up to 3,000 requests can be concurrently processed in a region. If the number of concurrent requests exceeds the maximum number of requests that can be processed by Function Compute, the ResourceExhausted error is returned.

Note

To increase the upper limit of on-demand instances in a region, contact us.
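
If your callers may hit this limit, a simple client-side backoff can smooth out bursts. The following Node.js sketch is only an illustration; it assumes that the invocation client surfaces the ResourceExhausted error code on the thrown error, so adapt the check to the client that you use.

// Hedged sketch of client-side backoff when invocations are throttled.
// The way the error code is exposed depends on your invocation client.
async function invokeWithRetry(invoke, maxAttempts = 5) {
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return await invoke();
        } catch (err) {
            // Assumption: the throttling error carries the ResourceExhausted code.
            const throttled = String(err && (err.code || err.message)).includes('ResourceExhausted');
            if (!throttled || attempt === maxAttempts) {
                throw err;
            }
            // Exponential backoff before retrying the throttled invocation.
            await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** attempt));
        }
    }
}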

Logs

  • If Instance Concurrency is set to 1 and you specify X-Fc-Log-Type: Tail in the request header when you invoke a function, Function Compute returns the function logs in the X-Fc-Log-Result field of the response header. If a single instance concurrently processes multiple requests, the response header does not include function logs, because the logs of a specific request cannot be separated from those of the other concurrent requests.

  • In the Node.js runtime, console.info() is used to print logs that include request IDs. If Instance Concurrency is set to a value greater than 1, console.info() cannot attribute log lines to the correct requests. In the following sample log, both logger end lines are attributed to req2, although one of them belongs to req1:

    2019-11-06T14:23:37.587Z req1 [info] logger begin
    2019-11-06T14:23:37.587Z req1 [info] ctxlogger begin
    2019-11-06T14:23:37.587Z req2 [info] logger begin
    2019-11-06T14:23:37.587Z req2 [info] ctxlogger begin
    2019-11-06T14:23:40.587Z req1 [info] ctxlogger end
    2019-11-06T14:23:40.587Z req2 [info] ctxlogger end
    2019-11-06T14:23:40.587Z req2 [info] logger end
    2019-11-06T14:23:40.587Z req2 [info] logger end

    To print logs, we recommend that you use context.logger.info(). This method displays request IDs as expected. The following sample code provides an example:

    exports.handler = (event, context, callback) => {
        // console.info() may attribute this line to the wrong request when
        // the instance processes multiple requests at the same time.
        console.info('logger begin');
        // context.logger.info() always logs with the request ID of this context.
        context.logger.info('ctxlogger begin');

        setTimeout(function() {
            context.logger.info('ctxlogger end');
            console.info('logger end');
            callback(null, 'hello world');
        }, 3000);
    };

Error handling

When an instance concurrently processes multiple requests, an unexpected process exit that is caused by one failed request affects the other requests that the instance is processing. We recommend that you add request-level exception handling to your function code so that a failed request does not affect the other requests. The following Node.js code provides an example:

exports.handler = (event, context, callback) => {
    try {
        JSON.parse(event);
    } catch (ex) {
        // Return the error for this request only and stop processing it, so
        // that the instance keeps serving the other concurrent requests.
        callback(ex);
        return;
    }

    callback(null, 'hello world');
};

Shared variables

When an instance concurrently processes multiple requests, errors may occur if multiple requests modify the same variable at the same time. Use mutual exclusion in your code to ensure that shared variables are modified in a thread-safe manner. The following Java code provides an example:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import com.aliyun.fc.runtime.Context;
import com.aliyun.fc.runtime.StreamRequestHandler;

public class App implements StreamRequestHandler
{
    // Shared by all requests that are concurrently processed by this instance.
    private static int counter = 0;

    @Override
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {
        // Use mutual exclusion so that concurrent requests do not update the counter at the same time.
        synchronized (this) {
            counter = counter + 1;
        }
        outputStream.write(new String("hello world").getBytes());
    }
}

Monitoring metrics

After you set Instance Concurrency to a value greater than 1, the instance monitoring chart shows that fewer instances are used to process the same number of requests. [Figure: instance count monitoring chart]

Limits

  • Supported runtimes: Node.js, Python 3 and Python 3.9, Java, Go 1, .NET Core 3.1, Custom runtime, and Custom Container runtime.

  • Number of requests that can be concurrently processed by a single instance: 1 to 200.

  • Function execution logs in the X-Fc-Log-Result field of the response header: not supported if the Instance Concurrency parameter is set to a value greater than 1.

Instance concurrency for a function

When you create or update a function, you can configure the Instance Concurrency parameter. For more information, see Manage functions.


If you use provisioned instances, functions in provisioned mode can also concurrently process multiple requests. For more information, see Configure provisioned instances and auto scaling rules.

More information

For more information about how to use the SDK for Node.js to configure instance concurrency, see Specify the instance concurrency.
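
As a quick orientation, the following Node.js sketch updates the instance concurrency of a function with the @alicloud/fc2 SDK. The account ID, credentials, service name, and function name are placeholders, and the exact parameters are assumptions; refer to the linked topic for the authoritative usage.

// Minimal sketch, assuming the @alicloud/fc2 Node.js SDK and placeholder
// account, credential, service, and function names.
const FC = require('@alicloud/fc2');

const client = new FC('<your-account-id>', {
    accessKeyID: process.env.ALIBABA_CLOUD_ACCESS_KEY_ID,
    accessKeySecret: process.env.ALIBABA_CLOUD_ACCESS_KEY_SECRET,
    region: 'cn-hangzhou',
});

async function setInstanceConcurrency() {
    // instanceConcurrency specifies how many requests one instance processes at a time.
    await client.updateFunction('my-service', 'my-function', {
        instanceConcurrency: 10,
    });
}

setInstanceConcurrency().catch(console.error);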