This topic provides answers to some frequently asked questions (FAQ) about the performance of File Storage NAS (NAS) file systems. The Server Message Block (SMB) and Network File System (NFS) protocols are supported.
What is the relationship between the performance of a file system and the storage capacity of the file system?
General-purpose NAS file systems
The read and write performance (maximum throughput) of a file system is linearly proportional to the storage capacity of the file system. A higher capacity indicates a higher throughput. For more information, see General-purpose NAS file systems.
Extreme NAS file systems
The read and write performance of a file system increases stepwise as the storage capacity increases. For more information, see Extreme NAS file systems.
What is the relationship between the performance of a file system and the directory size?
When you traverse a directory of a file system, the following conditions may cause slow responses:
The directory is being modified. For example, a file in the directory is being created, deleted, or renamed. This causes slow responses due to frequent cache invalidations.
The data size of the directory is too large. This causes slow responses due to cache evictions.
Solution
Limit the number of files stored in the directory. Store fewer than 10,000 files in a single directory.
Do not frequently modify the directory when you traverse the directory.
If the directory contains more than 10,000 files and you do not need to frequently modify the directory, you can mount the file system by using the NFSv3 protocol and specify the nordirplus option to accelerate the traversal process.
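For example, assuming a hypothetical mount target domain and the local path /mnt, an NFSv3 mount command that specifies the nordirplus option may look like the following:
sudo mount -t nfs -o vers=3,nolock,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,nordirplus <mount-target-domain>:/ /mnt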
What are the impacts of mount options on the performance of a NAS file system?
Mount options have significant impacts on the performance of a NAS file system.
rsize and wsize:
Impact: The two mount options define the block size of the data exchange between the client and the server. A larger block size can reduce the number of network requests, which in turn improves throughput, especially when working with large files.
Recommended value: 1048576 (1 MB). We recommend that you use the maximum value whenever possible. A smaller block size may result in more network overhead, which reduces performance.
hard:
Impact: After this mount option is enabled, if File Storage NAS is unavailable, the client keeps retrying requests until the file system recovers. This ensures data integrity and consistency.
We recommend that you enable this mount option. This mount option helps prevent data loss, but may cause the application to hang temporarily. Therefore, the mount option is suitable for scenarios that require high availability.
timeo:
Impact: This mount option defines the time the client waits for a response before retrying. Setting a timeout period that is too short may result in frequent request retries, which can degrade performance, especially if the network is unstable.
Recommended value: 600 (60 seconds). The value ensures that the network has enough time to recover, thereby reducing the number of retries.
retrans:
Impact: This mount option defines the number of times the NFS client retries a failed request. A higher number of retries can increase the success rate of requests, but may also increase latency.
Recommended value: 2. The value balances performance with data reliability.
noresvport:
Impact: After this mount option is enabled, a new TCP port is used to ensure network continuity when the network recovers from a failure.
We recommend that you enable this mount option. The mount option ensures the stability of the network connection.
To prevent data inconsistency, we recommend that you do not specify the soft mount option. If you specify the soft mount option, make sure that you understand the potential risks.
We recommend that you keep the default values for all other mount options. If you change the read or write buffer sizes or disable attribute caching, performance may be reduced.
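For reference, a mount command that combines the recommended options may look like the following. The mount target domain and the local path /mnt are placeholders that you need to replace with your own values:
sudo mount -t nfs -o vers=3,nolock,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport <mount-target-domain>:/ /mnt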
How does the bandwidth of ECS instances restrict the performance of a NAS file system?
The maximum throughput of a NAS file system cannot exceed the internal bandwidth of ECS instances. If the internal bandwidth is low, the throughput is limited.
What happens if the read and write throughput of a request exceeds the threshold?
If the read and write throughput of a request sent by you or your application exceeds the threshold, NAS throttles the request. In this case, the latency increases.
For a General-purpose NAS file system, you can run the truncate command to increase the throughput threshold. For more information, see How do I increase the read and write throughput threshold of a General-purpose NAS file system?
For an Extreme NAS file system, you can scale up the file system to increase the throughput threshold. For more information, see Scale up an Extreme NAS file system.
For more information about the throughput thresholds of General-purpose NAS file systems and Extreme NAS file systems, see Performance metrics of General-purpose NAS file systems and Performance metrics of Extreme NAS file systems.
How do I increase the read and write throughput threshold of a General-purpose NAS file system?
The read and write throughput of a General-purpose NAS file system linearly increases with the storage capacity of the file system. For more information about the relationship between the read and write throughput and the capacity usage of a file system, see General-purpose NAS file systems.
You can increase the capacity of the file system by writing hole files (sparse files) to the file system, or by running the truncate command to generate a file on the file system. Then, the read and write throughput of the file system is increased. You are charged for the space that is occupied by the hole files or the generated file in a NAS file system. For more information, see Billing of General-purpose NAS file systems.
For example, if you write a hole file of 1 TiB to a Capacity NAS file system, you can increase the read and write throughput of the file system by 150 MB/s. If you write a hole file of 1 TiB to a Performance NAS file system, you can increase the read and write throughput of the file system by 600 MB/s.
Linux
If you use Linux, you can run the truncate command to generate a file on a file system to increase the read and write throughput of the file system.
sudo truncate --size=1T /mnt/sparse_file.txt
In the preceding command, /mnt is the mount path of the file system on the compute node.
Windows
If you use Windows, you can write hole files to a file system to increase the read and write throughput of the file system.
fsutil file createnew Z:\sparse_file.txt 1099511627776
In the preceding command, Z:\ is the mount path of the file system on the compute node.
How do I increase the throughput of accessing NAS on Linux?
Solution 1: Configure the nconnect parameter to increase the throughput of a single ECS instance to access NAS
The nconnect parameter is an option for mounting an NFS file system on a Linux ECS instance. You can use this parameter to establish more TCP connections between the NFS client and the file system to increase the throughput. Tests indicate that the nconnect parameter can increase the throughput with which a single ECS instance accesses NAS by 3 to 6 times, reaching 3 GB/s.
Scenarios
Multiple concurrent reads and writes are performed on a single ECS instance (more than 16 concurrent reads and writes).
Prerequisites
The Linux kernel version is 5.3 or later.
Procedure
Add the nconnect parameter to the mount command. We recommend that you set the nconnect parameter to 4. The following command provides an example. Replace <mount-target-domain> with the domain name of the mount target and /mnt with the local mount path:
sudo mount -t nfs -o vers=3,nolock,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,nconnect=4 <mount-target-domain>:/ /mnt
Important: The nconnect parameter increases the throughput with which a single ECS instance accesses NAS, but does not increase the throughput threshold of the NAS file system. For services that involve single concurrency, small data blocks, or latency-sensitive workloads, enabling the nconnect parameter increases latency. We recommend that you do not enable the nconnect parameter for such services.
Solution 2: Modify sunrpc.tcp_slot_table_entries to increase the throughput of a single ECS instance to access NAS
The sunrpc setting in the Linux kernel determines the number of communication slots within a single NFS connection. Different Linux versions adopt different sunrpc configurations. If the slot count is too high, latency may increase. If the slot count is too low, throughput may be insufficient. If you require high throughput, we recommend that you set the slot count to 128. If you require low latency, we recommend that you set the slot count to 16 or less.
Note: The effect of configuring the sunrpc.tcp_slot_table_entries parameter is far worse than that of the nconnect parameter. For Linux kernel 5.3 and later, we recommend that you use the nconnect parameter instead.
Scenarios
Multiple concurrent reads and writes are performed on a single ECS instance, and the Linux kernel version is earlier than 3.10.
Procedure
For more information, see How do I change the maximum number of concurrent NFS requests from an NFS client?
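For reference, the following sketch shows one common way to set the slot count to 128. It assumes the sunrpc kernel module options file and the sysctl interface available on common Linux distributions; verify the file paths on your system:
# Persist the slot count so that it is applied when the sunrpc module is loaded.
echo "options sunrpc tcp_slot_table_entries=128" | sudo tee /etc/modprobe.d/sunrpc.conf
# Apply the value to the running kernel. The change takes effect for file systems mounted afterwards.
sudo sysctl -w sunrpc.tcp_slot_table_entries=128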
Why does NGINX require a long period of time to write logs to a file system?
Background information
You can use the following two directives to configure NGINX logs: the log_format directive specifies the log format, and the access_log directive specifies the log storage path, format name, and cache size.
Issue
NGINX requires a long period of time to write logs to the file system, which reduces the performance of the file system.
Cause
The path that is specified in the access_log directive contains variables. Each time NGINX writes logs to the file system, the destination files are opened; after the logs are written, the files are closed. To ensure data visibility, the data is written back to the NAS server when the files are closed. This reduces the performance of the file system.
Solution
Solution 1: Remove the variables from the access_log directive and store the logs in a fixed path.
Solution 2: Use the open_log_file_cache directive to cache the file descriptors of frequently used log files. This improves logging performance for paths that contain variables. For more information, see open_log_file_cache.
Recommended configurations:
open_log_file_cache max=1000 inactive=1m valid=3m min_uses=2;
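The following sketch shows how these directives may fit together. The log path under /mnt and the $host variable are hypothetical and only illustrate a path that contains a variable:
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" $status';
    server {
        # Without open_log_file_cache, the variable in the path causes NGINX to open and close
        # the log file for every write, which is expensive on a NAS file system.
        access_log /mnt/logs/$host/access.log main;
        open_log_file_cache max=1000 inactive=1m valid=3m min_uses=2;
    }
}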
Why are I/O operations delayed on an SMB file system?
Issue
If you access an SMB file system by using a mount target, you must wait for several minutes before you can perform I/O operations on the file system.
Cause
An NFS client is installed on the ECS instance but is not used.
The Internet file server fails to access the SMB file system because the WebClient service is enabled.
The files in the file system cannot be opened because Nfsnp is included in the value of the ProviderOrder key.
Solution
The first time you access an SMB file system, we recommend that you ping the domain name of the mount target to check the network connectivity between the compute node and the file system and check whether the latency is within the allowed range.
If the ping command fails, check your network settings and make sure that the network is connected.
If the latency is high, run the ping command to ping the IP address of the mount target. If the latency of accessing the IP address is less than the latency of accessing the domain name, check the configurations of the Domain Name System (DNS) server.
If an NFS client is installed but not used, we recommend that you delete the NFS client.
Disable the WebClient service.
Check the registry key in the following path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\NetworkProvider\Order\ProviderOrder. If the value of the ProviderOrder key contains Nfsnp, remove Nfsnp and restart the ECS instance on which the file system is mounted. A sketch of the commands for these checks is provided below.
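The following sketch shows Windows commands for the preceding checks. Run the commands from an elevated command prompt and replace <mount-target-domain> with the domain name of the mount target:
:: Check the network connectivity and latency of the mount target.
ping <mount-target-domain>
:: Stop and disable the WebClient service.
net stop WebClient
sc config WebClient start= disabled
:: Check whether the value of the ProviderOrder key contains Nfsnp.
reg query "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\NetworkProvider\Order" /v ProviderOrder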
You can use the fio tool to check the performance of the file system.
fio.exe --name=./iotest1 --direct=1 --rwmixread=0 --rw=write --bs=4K --numjobs=1 --thread --iodepth=128 --runtime=300 --group_reporting --size=5G --verify=md5 --randrepeat=0 --norandommap --refill_buffers --filename=\\<mount point dns>\myshare\testfio1
We recommend that you perform read and write operations based on large data blocks. Small data blocks consume more network resources. If you cannot change the data block size, you can use the BufferedOutputStream class to write data to a specified output stream with a specified buffer size, as shown in the following sketch.
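A minimal Java sketch follows. The file path on the Z:\ share and the data written are hypothetical; the point is that the 1 MB buffer makes many small writes reach the SMB file system as larger blocks:
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class BufferedWriteExample {
    public static void main(String[] args) throws IOException {
        // Wrap the file output stream in a BufferedOutputStream with a 1 MB buffer.
        try (BufferedOutputStream out = new BufferedOutputStream(
                new FileOutputStream("Z:\\myshare\\app.log"), 1024 * 1024)) {
            for (int i = 0; i < 100000; i++) {
                out.write(("record " + i + "\n").getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}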
Why are I/O operations on Windows SMB clients delayed?
Cause
By default, the large mtu option is disabled on Windows SMB clients. This limits the I/O performance of Windows SMB clients.
Solution
You can enable the large mtu option by modifying the Windows registry. The registry key is stored in the following path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanmanWorkstation\Parameters.
Create a key of the DWORD data type and name the key DisableLargeMtu. Set the value of the key to 0. Then, restart the ECS instance on which the file system is mounted for the change to take effect.
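For reference, the following sketch creates the key from an elevated command prompt; verify the path against your Windows version:
reg add "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v DisableLargeMtu /t REG_DWORD /d 0 /f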
How can I improve the performance of access from IIS to NAS?
Cause
When Internet Information Services (IIS) accesses a file in the shared directory of a NAS file system, the IIS backend may access the shared directory multiple times. Each access to the NAS file system requires at least one network interaction, which is different from accessing a local disk. Although each access request does not take a long time, the client may take a long time to respond if multiple access requests are sent.
Solution
Use the SMB Redirector component to optimize the performance of SMB file systems. For more information, see SMB2 Client Redirector Caches Explained.
Modify the registry keys in the following path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\LanmanWorkstation\Parameters. Change the values of the following three keys to 600 or higher (a command sketch is provided after the note below):
FileInfoCacheLifetime
FileNotFoundCacheLifetime
DirectoryCacheLifetime
Note: If none of the preceding keys exists, perform the following steps:
Make sure that the file system uses the SMB protocol.
Check whether the Windows version supports the keys. If the Windows version supports the keys but the keys do not exist, create the keys. For more information, see Performance tuning for file servers.
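For reference, the following sketch creates or updates the three keys with a value of 600 from an elevated command prompt; verify that the keys apply to your Windows version:
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\LanmanWorkstation\Parameters" /v FileInfoCacheLifetime /t REG_DWORD /d 600 /f
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\LanmanWorkstation\Parameters" /v FileNotFoundCacheLifetime /t REG_DWORD /d 600 /f
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\LanmanWorkstation\Parameters" /v DirectoryCacheLifetime /t REG_DWORD /d 600 /f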
If IIS frequently accesses these files, we recommend that you store web-related files, such as JS and CSS files, on local disks.
If the read and write performance of IIS cannot meet your business requirements, submit a ticket.
Why does a file system respond slowly or fail to respond when I run the ls command?
Issue
When you traverse a directory of a file system, the file system gives a slow response or does not respond. For example, this occurs when you run an ls command that contains the asterisk (*) and question mark (?) wildcards, run the rm -rf command, or call the getdents operation.
Cause
The directory is being modified. For example, a file in the directory is being created, deleted, or renamed. This causes slow responses due to frequent cache invalidations.
The data size of the directory is too large. This causes slow responses due to cache evictions.
Solution
Limit the number of files stored in the directory. Store fewer than 10,000 files in a single directory.
Do not frequently modify the directory when you traverse the directory.
If the directory contains more than 10,000 files and you do not need to frequently modify the directory, you can mount the file system by using the NFSv3 protocol and specify the nordirplus option to accelerate the traversal process. For more information, see Mount parameters.
How do I improve the NFS sequential read performance on Linux kernel 5.4 or later?
The read_ahead_kb parameter of NFS defines the size (in KB) of data to be read in advance or prefetched by the Linux kernel during a sequential read operation.
For Linux kernel versions earlier than 5.4, the value of the read_ahead_kb parameter is determined by multiplying NFS_MAX_READAHEAD by rsize (the size of data read by the client, as specified in the mount options). From Linux kernel version 5.4, the NFS client uses the default value of the read_ahead_kb parameter, which is 128 KB. Therefore, we recommend that you increase the value of the read_ahead_kb parameter to 15 MB when you use the recommended mount options.
After the file system is mounted, you can run the following commands to reset the value of the read_ahead_kb parameter. In the commands, replace nas-mount-point with the local path of the mounted file system and replace read-ahead-kb with the size (in KB) of the data to be read in advance or prefetched.
# Get the device number of the mounted file system.
device_number=$(stat -c '%d' nas-mount-point)
# Compute the major and minor device numbers from the device number.
((major = ($device_number & 0xFFF00) >> 8))
((minor = ($device_number & 0xFF) | (($device_number >> 12) & 0xFFF00)))
# Write the prefetch size (in KB) to the backing device information (BDI) entry of the device.
sudo bash -c "echo read-ahead-kb > /sys/class/bdi/$major:$minor/read_ahead_kb"
The following commands provide an example of using /mnt as the local path of the mounted file system and setting the value of the read_ahead_kb parameter to 15 MB (the size of the data to be read in advance or prefetched):
device_number=$(stat -c '%d' /mnt)
((major = ($device_number & 0xFFF00) >> 8))
((minor = ($device_number & 0xFF) | (($device_number >> 12) & 0xFFF00)))
sudo bash -c "echo 15000 > /sys/class/bdi/$major:$minor/read_ahead_kb"