You can test the performance of block storage devices to monitor their capabilities and to tune them for optimal performance. This topic describes how to use the flexible I/O tester (fio) on a Linux Elastic Compute Service (ECS) instance to measure the key performance metrics of block storage devices attached to the instance, including IOPS, throughput (data transfer rate), and latency (response time).
fio is a powerful open source I/O benchmarking tool that can measure the performance of block storage devices under workloads such as random reads and writes and sequential reads and writes.
Procedure
Testing raw disks can provide accurate results but may destroy the file system structure on the disks. To prevent this issue, we recommend that you back up disk data by creating snapshots before you test. For more information, see Create a snapshot.
To prevent data loss, we strongly recommend that you do not test the system disk on which the operating system resides or a disk that contains important data. We recommend that you test block storage performance on new data disks or on temporary disks that do not contain important data.
If you want to perform a raw disk stress test on a system disk, we recommend that you complete the stress test and reset the operating system before you deploy services. This prevents potential issues caused by the stress test and helps ensure the long-term stable operation of the system.
Performance test results are obtained in a test environment and are for reference only. In actual production environments, the performance of cloud disks may vary due to factors such as the network environment and concurrent access.
Connect to an ECS instance.
For more information, see Connect to a Linux instance by using a password or key.
Before you test a block storage device, make sure that the device is 4 KiB aligned.
Note: 4 KiB alignment on block storage devices helps reduce data transfer overhead and improve I/O performance.
Run the following command to view the start sectors of the partitions:
sudo fdisk -lu
If the value of Start in the command output is divisible by 8, the device is 4 KiB aligned. Otherwise, perform 4 KiB alignment before you proceed with the test.
Device     Boot Start    End       Sectors   Size Id Type
/dev/vda1  *    2048     83886046  83883999  40G  83 Linux
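Optionally, you can script this check. The following sketch reads the partition's start sector from sysfs and tests whether it is divisible by 8. The partition name vda1 is an assumption based on the preceding output; replace it with the name of your partition.
# Hypothetical check for /dev/vda1: the start sector is reported in 512-byte sectors,
# so a value that is divisible by 8 indicates 4 KiB alignment.
start=$(cat /sys/class/block/vda1/start)
if [ $((start % 8)) -eq 0 ]; then
  echo "/dev/vda1 is 4 KiB aligned (start sector: $start)"
else
  echo "/dev/vda1 is not 4 KiB aligned (start sector: $start)"
fi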
Run the following commands in sequence to install libaio and fio:
sudo yum install libaio -y
sudo yum install libaio-devel -y
sudo yum install fio -y
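The preceding commands apply to yum-based images such as Alibaba Cloud Linux and CentOS. On Debian- or Ubuntu-based images (an assumption, not covered by the preceding steps), you can install the equivalent packages with apt:
sudo apt-get update
sudo apt-get install -y fio libaio-dev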
Run the following command to switch to the /tmp directory:
cd /tmp
Run the test commands. For information about the commands, see the following sections:
For information about the commands used to test the performance of cloud disks, see the Commands used to test the performance of cloud disks section of this topic.
For information about the commands used to test the performance of local disks, see the Commands used to test the performance of local disks section of this topic.
Commands used to test the performance of cloud disks
The values of the parameters in the sample commands are for reference only. Replace them with your actual values. For example, if the device name of the cloud disk that you want to test is /dev/vdb, replace /dev/your_device with /dev/vdb in the following sample commands. For more information about the parameters, see the fio parameters section of this topic.
Run the following command to test the random write IOPS of a cloud disk:
sudo fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=Rand_Write_Testing
Run the following command to test the random read IOPS of a cloud disk:
sudo fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=Rand_Read_Testing
Run the following command to test the sequential write throughput of a cloud disk:
sudo fio -direct=1 -iodepth=64 -rw=write -ioengine=libaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=Write_PPS_Testing
Run the following command to test the sequential read throughput of a cloud disk:
sudo fio -direct=1 -iodepth=64 -rw=read -ioengine=libaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=Read_PPS_Testing
Run the following command to test the random write latency of a cloud disk:
sudo fio -direct=1 -iodepth=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/dev/your_device -name=Rand_Write_Latency_Testing
Run the following command to test the random read latency of a cloud disk:
sudo fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/dev/your_device -name=Rand_Read_Latency_Testing
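The preceding commands test pure random read or pure random write workloads. If you want to approximate a mixed workload, fio also provides the randrw mode together with the rwmixread parameter. The following command is a sketch rather than part of the standard test set, and the 70% read ratio is an assumption that you can adjust to match your workload:
sudo fio -direct=1 -iodepth=128 -rw=randrw -rwmixread=70 -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=Rand_RW_Testing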
For more information, see Test the IOPS performance of an ESSD.
Commands used to test the performance of local disks
The following sample commands are applicable to local Non-Volatile Memory Express (NVMe) SSDs and local Serial Advanced Technology Attachment (SATA) HDDs.
The values of the parameters in the sample commands are for reference only. Replace them with your actual values. For example, if the device name of the local disk that you want to test is /dev/vdb, replace /dev/your_device with /dev/vdb in the following sample commands. For more information about the parameters, see the fio parameters section of this topic.
Run the following command to test the random write IOPS of a local disk:
sudo fio -direct=1 -iodepth=32 -rw=randwrite -ioengine=libaio -bs=4k -numjobs=4 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
Run the following command to test the random read IOPS of a local disk:
sudo fio -direct=1 -iodepth=32 -rw=randread -ioengine=libaio -bs=4k -numjobs=4 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
Run the following command to test the sequential write throughput of a local disk:
sudo fio -direct=1 -iodepth=128 -rw=write -ioengine=libaio -bs=128k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
Run the following command to test the sequential read throughput of a local disk:
sudo fio -direct=1 -iodepth=128 -rw=read -ioengine=libaio -bs=128k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
Run the following command to test the random write latency of a local disk:
sudo fio -direct=1 -iodepth=1 -rw=randwrite -ioengine=libaio -bs=4k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
Run the following command to test the random read latency of a local disk:
sudo fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=4k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
Run the following command to test the sequential write latency of a local disk:
sudo fio -direct=1 -iodepth=1 -rw=write -ioengine=libaio -bs=4k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
Run the following command to test the sequential read latency of a local disk:
sudo fio -direct=1 -iodepth=1 -rw=read -ioengine=libaio -bs=4k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
For information about how to test the performance of local disks on an i4p instance, see Test the performance of local disks on an i4p instance.
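If the disk that you want to test already contains a file system and data, you can point fio at a test file instead of the raw device to avoid destroying the file system. The following command is a sketch: the /mnt/fio_testfile path and the 10G file size are assumptions that you should replace with a path on the target disk and a suitable size, and results measured through a file system are typically slightly lower than raw disk results.
sudo fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=10G -numjobs=1 -runtime=1000 -group_reporting -filename=/mnt/fio_testfile -name=File_Rand_Write_Testing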
fio parameters
The following table describes the parameters in the preceding fio commands that are used to test disk performance.
| Parameter | Description |
| --- | --- |
| direct | Specifies whether to use direct I/O. Valid values: 1: uses direct I/O, which bypasses the I/O buffer and writes data directly to the device. 0: uses buffered I/O. The sample commands in this topic set this parameter to 1. |
| iodepth | The I/O queue depth during the test. For example, if you set the iodepth parameter to 128, a maximum of 128 I/O requests are kept in flight for each fio job. |
| rw | The read/write policy that is used during the test. Valid values: randwrite: random writes. randread: random reads. write: sequential writes. read: sequential reads. randrw: mixed random reads and writes. |
| ioengine | The I/O engine that fio uses to test disk performance. In most cases, libaio is used. For information about other available I/O engines, see the fio documentation. |
| bs | The block size of I/O units. Default value: 4k, which indicates 4 KiB. Separate values for reads and writes can be specified in the <Value for reads>,<Value for writes> format. If you do not specify a value, the default value is used. |
| size | The size of the test data. fio ends the test only after the specified amount of data is read or written, unless the test is limited by other factors such as runtime. If this parameter is not specified, fio uses the full size of the specified files or devices. The value can also be a percentage from 1% to 100%. For example, if you set the size parameter to 20%, fio uses 20% of the size of the specified files or devices. |
| numjobs | The number of concurrent threads that are used during the test. Default value: 1. |
| runtime | The duration of the test, which indicates the period of time for which fio runs. If this parameter is not specified, the test does not end until the amount of data specified by the size parameter is read or written in the block size specified by the bs parameter. |
| group_reporting | The display mode of the test results. If this parameter is specified, aggregated statistics for all jobs are displayed instead of statistics for each individual job. |
| filename | The path of the object that you want to test. The path can be the device name of a disk or a file path. In this topic, the test object of fio is an entire disk that does not contain a file system (a raw disk). To prevent the data of other disks from being damaged, replace /dev/your_device in the preceding commands with your actual device path. |
| name | The name of the test. You can specify this parameter based on your needs. For example, Rand_Write_Testing is used in the preceding commands. |
For more information about the parameters, see fio(1) - Linux man page.
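In addition to command-line options, fio can read the same parameters from a job file in INI format. The following sketch is equivalent to the random write IOPS command for cloud disks; the rand_write.fio file name is an assumption. Save the following content to rand_write.fio:
[global]
# Options shared by all jobs; these mirror the command-line test above.
direct=1
ioengine=libaio
bs=4k
size=1G
numjobs=1
runtime=1000
group_reporting
filename=/dev/your_device

[Rand_Write_Testing]
rw=randwrite
iodepth=128
Then run the test:
sudo fio rand_write.fio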