
Elastic Compute Service:Configure IRQ Affinity for NIC multi-queue and change the number of queues on NICs

Last Updated: Sep 02, 2024

The network interface controller (NIC) multi-queue feature allows a NIC to process data packets in multiple receive (RX) and transmit (TX) queues in parallel. When you use the NIC multi-queue feature, you must configure Interrupt Request (IRQ) Affinity to assign interrupts for different queues to specific CPUs, instead of allowing the interrupts to be assigned to arbitrary CPUs. This helps reduce contention among CPUs and improve network performance. This topic describes how to configure IRQ Affinity and change the number of queues on NICs on a Linux Elastic Compute Service (ECS) instance.
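At the operating-system level, the IRQ-to-CPU binding that this topic configures is exposed through procfs. The following sketch shows how to read the binding for one interrupt; the IRQ number is a hypothetical example, and you would pick a real one from /proc/interrupts:

```shell
# Read the CPU affinity of one interrupt via procfs.
# IRQ 27 is a hypothetical example; list real IRQs with: cat /proc/interrupts
irq=27
if [ -r "/proc/irq/$irq/smp_affinity_list" ]; then
    # CPUs currently allowed to service this interrupt, e.g. "0-3"
    cat "/proc/irq/$irq/smp_affinity_list"
    # Pinning the interrupt to CPU 1 would be:
    #   echo 1 > /proc/irq/$irq/smp_affinity_list
fi
```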

Prerequisites

  • The NIC multi-queue feature is supported by the instance type of the Linux ECS instance.

    For information about the instance types that support the NIC multi-queue feature, see Overview of instance families. If the number of NIC queues for an instance type is greater than one, the instance type supports the NIC multi-queue feature.

  • The NIC multi-queue feature is supported by the image of the Linux ECS instance.

    Important
    • Specific early-version public images that contain kernel versions earlier than 2.6 may not support the NIC multi-queue feature. We recommend that you use the latest public images.

    • By default, IRQ Affinity is enabled in all images without the need for additional configurations, except Red Hat Enterprise Linux images.

      IRQ Affinity is supported by Red Hat Enterprise Linux images but is not enabled by default for the images. To configure IRQ Affinity for instances that use Red Hat Enterprise Linux images, perform the operations that are described in this topic.

    • You can change the number of queues on NICs, configure IRQ Affinity, or perform both preceding operations to optimize network performance. To ensure load balancing, assign an appropriate number of queues to each CPU core and configure IRQ Affinity based on the actual loads and performance data of the system, such as throughput and latency. To obtain the system performance data, you can test various NIC queue and IRQ Affinity configurations.
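As a starting point for such tests, you can check how many queues an interface currently exposes. A minimal sysfs-based sketch (eth0 is an example interface name; substitute your own):

```shell
# Count the RX/TX queues currently active on an interface via sysfs.
iface=eth0
if [ -d "/sys/class/net/$iface/queues" ]; then
    rx=$(ls -d /sys/class/net/$iface/queues/rx-* | wc -l)
    tx=$(ls -d /sys/class/net/$iface/queues/tx-* | wc -l)
    echo "$iface: $rx RX queues, $tx TX queues"
fi
```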

Configure IRQ Affinity

This section describes how to use the ecs_mq script to configure IRQ Affinity for a Linux ECS instance that uses a Red Hat Enterprise Linux 9.2 image. On instances that use other images, IRQ Affinity is enabled by default and requires no configuration.

  1. Connect to the Linux ECS instance.

    For more information, see Connect to a Linux instance by using a password or key.

  2. (Optional) Disable the irqbalance service.

    The irqbalance service dynamically modifies IRQ Affinity configurations and conflicts with the ecs_mq script. We recommend that you disable the irqbalance service.

    systemctl stop irqbalance.service
  3. Run the following command to download the package that contains the new version of the ecs_mq script:

    wget https://ecs-image-tools.oss-cn-hangzhou.aliyuncs.com/ecs_mq/ecs_mq_2.0.tgz

    Benefits of the new version of the ecs_mq script

    Compared with the old version of the ecs_mq script, the new version provides the following advantages:

    • Preferentially binds interrupts for a NIC to CPUs on the Non-Uniform Memory Access (NUMA) node with which the Peripheral Component Interconnect Express (PCIe) interface of the NIC is associated.

    • Optimizes the logic for tuning multiple network devices.

    • Binds interrupts for NICs of different specifications based on the ratio of the number of NIC queues to the number of CPUs.

    • Optimizes the mechanism of binding interrupts based on the positions of CPU siblings.

    • Resolves high latency issues that may occur during memory access across NUMA nodes.

    • Is used by default and provides commands that you can use to switch between the old and new versions of the ecs_mq script.

      • The ecs_mq_rps_rfs old command is used to switch to the old version of the ecs_mq script.

      • The ecs_mq_rps_rfs new command is used to switch to the new version of the ecs_mq script.

    Note

    In network performance tests, the new version of the ecs_mq script outperforms the old version by 5% to 30% in most packets-per-second (PPS) and bits-per-second (BPS) metrics.

  4. Run the following command to extract the ecs_mq script:

    tar -xzf ecs_mq_2.0.tgz
  5. Run the following command to change the working path:

    cd ecs_mq/
  6. Run the following command to install the environment that is required to run the ecs_mq script:

    bash install.sh redhat 9
    Note

    Replace redhat and 9 with the actual operating system name and major version number of the operating system.
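    If you are unsure which values to pass, the operating system name and major version can typically be derived from /etc/os-release. A sketch (verify that the resulting names match what install.sh expects):

```shell
# Derive the install.sh arguments from /etc/os-release.
. /etc/os-release
os_name="$ID"                 # e.g. "rhel", "centos", "alinux"
os_major="${VERSION_ID%%.*}"  # major version only, e.g. "9"
echo "bash install.sh $os_name $os_major"
```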

  7. Run the following command to start the ecs_mq script:

    systemctl start ecs_mq

    After the script is started, IRQ Affinity is automatically enabled.
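    To verify the resulting bindings, you can list the CPUs assigned to each NIC queue interrupt. The following sketch assumes a virtio NIC, which is typical on ECS instances; adjust the pattern if your driver differs:

```shell
# Print the CPU list bound to each virtio NIC queue interrupt.
for irq in $(awk -F: '/virtio.*(input|output)/ {gsub(/ /, "", $1); print $1}' /proc/interrupts); do
    printf 'IRQ %s -> CPUs %s\n' "$irq" "$(cat /proc/irq/$irq/smp_affinity_list)"
done
```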

Change the number of NIC queues

In this example, an ECS instance that runs Alibaba Cloud Linux 3 is used. Alibaba Cloud Linux 3 supports the NIC multi-queue feature.

Note

Test results indicate that, at the same packet forwarding rate and network bandwidth, two queues outperform a single queue by 50% to 100%, and the improvement is even greater with four queues. For information about how to test network performance, see Test the network performance of an instance. You can change the number of queues on NICs based on your business requirements.

  1. Connect to the Linux ECS instance.

    For more information, see Connect to a Linux instance by using a password or key.

  2. Run the ip address show command to view the network configurations.


  3. Run the following command to check whether the NIC multi-queue feature is enabled on the primary ENI eth0:

    ethtool -l eth0

    View the command output to check whether the NIC multi-queue feature is enabled.

    • If the value of the Combined field in the Pre-set maximums section is greater than 1, the NIC supports the multi-queue feature.

      To change the number of active queues, run the sudo ethtool -L eth0 combined N command, where N is the number of queues that you want to use. N must be less than or equal to the value of the Combined field in the Pre-set maximums section.

    • The value of the Combined field in the Current hardware settings section indicates the number of queues that are in effect.

    The following command output indicates that the NIC supports up to two queues and one queue is in effect.

    Channel parameters for eth0:
    Pre-set maximums:
    RX: 0
    TX: 0
    Other: 0
    Combined: 2 # This value indicates that the ENI supports up to two queues.
    Current hardware settings:
    RX: 0
    TX: 0
    Other: 0
    Combined: 1 # This value indicates that one queue is in effect on the ENI.
  4. Run the following command to configure the primary ENI to use two queues:

    sudo ethtool -L eth0 combined 2
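    Note that channel settings applied with ethtool do not persist across reboots. One common approach is a oneshot systemd unit that re-applies the setting at boot. The following is a sketch; the unit name, interface, and queue count are examples to adapt:

```ini
# /etc/systemd/system/nic-queues.service (example path and unit name)
[Unit]
Description=Re-apply NIC queue counts at boot
After=network.target

[Service]
Type=oneshot
# Adjust the path if ethtool is installed elsewhere on your system.
ExecStart=/usr/sbin/ethtool -L eth0 combined 2

[Install]
WantedBy=multi-user.target
```

    After you create the file, run sudo systemctl enable nic-queues.service to activate it at boot.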
  5. Run the following command to check whether the NIC multi-queue feature is enabled on the secondary ENI eth1:

    ethtool -l eth1

    The following command output indicates that the NIC multi-queue feature is enabled on the secondary ENI. To configure the number of queues on the secondary ENI, perform the subsequent step.

    Channel parameters for eth1:
    Pre-set maximums:
    RX: 0
    TX: 0
    Other: 0
    Combined: 4 # This value indicates that the ENI supports up to four queues.
    Current hardware settings:
    RX: 0
    TX: 0
    Other: 0
    Combined: 1 # This value indicates that one queue is in effect on the ENI.
  6. Run the following command to configure the secondary ENI to use four queues:

    sudo ethtool -L eth1 combined 4
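After you change the queue counts, you can confirm that the settings took effect. A sketch that falls back to sysfs if ethtool is unavailable:

```shell
# Confirm the active queue count matches what was set (eth1 in this example).
iface=eth1
if command -v ethtool >/dev/null 2>&1 && [ -d "/sys/class/net/$iface" ]; then
    ethtool -l "$iface" | sed -n '/Current hardware settings/,$p'
elif [ -d "/sys/class/net/$iface/queues" ]; then
    # Each active combined queue appears as a tx-N directory in sysfs.
    ls -d /sys/class/net/$iface/queues/tx-* | wc -l
fi
```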

References

For more information about IRQ Affinity, see IRQ Affinity.

For information about how to create an ENI, see Create a secondary ENI.

For information about how to bind an ENI, see Bind a secondary ENI.