After you create a Network File System (NFS) file system, you must mount the file system on Linux Elastic Compute Service (ECS) instances. This way, multiple ECS instances can share access to the file system. This topic describes how to mount an NFS file system on a Linux ECS instance.
Prerequisites
A Linux ECS instance is available in the region where the file system is created. A public IP address or elastic IP address is configured for the ECS instance. For more information, see Creation methods.
An NFS file system is created and the mount target of the file system is obtained. The file system and the ECS instance reside in the same virtual private cloud (VPC). For more information, see Create a file system.
To ensure optimal access performance, we recommend that you mount a file system by using NFSv3.
NFSv4.0 supports file locks, including range locks. If you need to modify a file on multiple Linux ECS instances at the same time, we recommend that you mount a file system by using NFSv4.0.
The NAS console allows you to mount a file system on an ECS instance in a few clicks. We recommend that you mount file systems by using the NAS console. For more information, see Mount an NFS file system in the NAS console.
Step 1: Install an NFS client
Before you mount an NFS file system on a Linux ECS instance, you must install an NFS client. After you install the NFS client, you no longer need to install the client the next time you mount a file system on the ECS instance.
Connect to the ECS instance. For more information, see Connection methods.
Install an NFS client.
Operating system and installation command:

Alibaba Cloud Linux, CentOS, or Red Hat: run the following command:
sudo yum install nfs-utils

Ubuntu or Debian: run the following commands in sequence:
sudo apt-get update
sudo apt-get install nfs-common
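If you manage a mix of distributions, the two cases above can be combined into a small helper that picks the package name based on the available package manager. This is a sketch for illustration only, not part of any Alibaba Cloud tooling; it prints the package name and does not install anything itself.

```shell
# Print the NFS client package that matches the local package manager.
# Helper for illustration only; run the matching install command yourself.
nfs_client_pkg() {
  if command -v yum >/dev/null 2>&1; then
    echo nfs-utils      # Alibaba Cloud Linux, CentOS, Red Hat
  elif command -v apt-get >/dev/null 2>&1; then
    echo nfs-common     # Ubuntu, Debian
  fi
}
echo "NFS client package for this system: $(nfs_client_pkg)"
```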
Increase the maximum number of concurrent NFS requests.
Run the following command to set the maximum number of concurrent NFS requests to 128. For more information, see How do I change the maximum number of concurrent NFS requests from an NFS client?
if (lsmod | grep sunrpc); then
(modinfo sunrpc | grep tcp_max_slot_table_entries) && sysctl -w sunrpc.tcp_max_slot_table_entries=128
(modinfo sunrpc | grep tcp_slot_table_entries) && sysctl -w sunrpc.tcp_slot_table_entries=128
fi
(modinfo sunrpc | grep tcp_max_slot_table_entries) && echo "options sunrpc tcp_max_slot_table_entries=128" >> /etc/modprobe.d/sunrpc.conf
(modinfo sunrpc | grep tcp_slot_table_entries) && echo "options sunrpc tcp_slot_table_entries=128" >> /etc/modprobe.d/sunrpc.conf
Step 2: Mount the NFS file system
NFS file systems can be manually or automatically mounted on Linux ECS instances. Manual mounting is suitable for temporary mounting. If you manually mount a NAS file system on an ECS instance, you must remount the file system every time the ECS instance is started or restarted. Automatic mounting is suitable for persistent mounting. If you enable automatic mounting for a NAS file system, you do not need to remount the file system every time the ECS instance is started or restarted. To prevent the mount information from being lost after the ECS instance is restarted, we recommend that you enable automatic mounting for a NAS file system after you manually mount the file system.
Manually mount the NFS file system
You can use the mount target of an NFS file system to mount the file system on a Linux ECS instance.
Mount the NFS file system.
To mount a General-purpose NAS file system, run one of the following commands.
Note: To ensure optimal access performance, we recommend that you mount a file system by using NFSv3.
NFSv4.0 supports file locks, including range locks. If you need to modify a file on multiple ECS instances at the same time, we recommend that you mount a file system by using NFSv4.0.
To use NFSv3 to mount the file system, run the following command:
sudo mount -t nfs -o vers=3,nolock,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport file-system-id.region.nas.aliyuncs.com:/ /mnt
To use NFSv4 to mount the file system, run the following command:
sudo mount -t nfs -o vers=4,minorversion=0,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport file-system-id.region.nas.aliyuncs.com:/ /mnt
To mount an Extreme NAS file system, run the following command:
sudo mount -t nfs -o vers=3,nolock,noacl,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport file-system-id.region.extreme.nas.aliyuncs.com:/share /mnt
The following table describes the parameters that you can configure in the mount command.
Parameter
Description
Mount address
General-purpose NAS file systems: file-system-id.region.nas.aliyuncs.com:/ /mnt
Extreme NAS file systems: file-system-id.region.extreme.nas.aliyuncs.com:/share /mnt
The format is <Domain name of a mount target>:<Name of a shared directory> <Path of a mount directory>. Replace the domain name of the mount target, the name of the shared directory, and the path of the mount directory with the actual values.
Domain name of a mount target: To view the domain name, log on to the NAS console. On the File System List page, find the file system that you want to manage and click Manage in the Actions column. On the Mount Targets tab, view the domain name of the mount target. For more information, see View the domain name of a mount target.
Name of a shared directory: the root directory (/) or a subdirectory (for example, /share) of the NAS file system. If you specify a subdirectory, make sure that the subdirectory exists.
Note: The shared directory of an Extreme NAS file system must start with /share, for example, /share or /share/subdir.
Path of a mount directory: the root directory (/) or a subdirectory (for example, /mnt) of the ECS instance. If you specify a subdirectory, make sure that the subdirectory exists.
vers
The protocol version of the file system.
vers=3: uses NFSv3 to mount the file system.
vers=4: uses NFSv4 to mount the file system.
minorversion
Specifies the minor version number of the protocol. NAS file systems support NFSv4.0. If you use NFSv4 to mount a NAS file system, you must set the minor version number to 0.
Note: General-purpose NAS file systems support both NFSv3 and NFSv4.0.
Extreme NAS file systems support only NFSv3.
Mount options
When you mount a file system, you can specify multiple mount options. Separate multiple mount options with commas (,). The following mount options are available:
rsize: specifies the size of data blocks that the client reads from the file system. Recommended value: 1048576.
wsize: specifies the size of data blocks that the client writes to the file system. Recommended value: 1048576.
Note: To prevent performance degradation, we recommend that you set the values of both the rsize mount option and the wsize mount option to 1048576.
hard: specifies that applications stop accessing a file system when the file system is unavailable, and wait until the file system becomes available. We recommend that you specify this mount option.
timeo: specifies the period in deciseconds (tenths of a second) for which the NFS client waits before the NFS client retries a request. Recommended value: 600. This value specifies 60 seconds.
Note: If you want to modify the timeo mount option, we recommend that you specify 150 or a greater value. The timeo mount option is measured in deciseconds (tenths of a second). For example, the value 150 indicates 15 seconds.
retrans: specifies the number of times that the NFS client retries a request. Recommended value: 2.
noresvport: specifies that a new TCP port is used to ensure network continuity between the file system and the ECS instance when the network recovers from a failure. We recommend that you specify this mount option.
Important: To prevent data inconsistency, we recommend that you do not specify the soft mount option. If you specify the soft mount option, make sure that you understand the potential risks.
We recommend that you keep the default values of all other mount options. If you change the read or write buffer sizes or disable attribute caching, performance may degrade.
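To make the placeholder substitution concrete, the following sketch assembles the NFSv3 mount command for a General-purpose NAS file system from its parts. The file system ID and region shown are hypothetical; substitute the values from your own mount target.

```shell
# Assemble the NFSv3 mount command for a General-purpose NAS file system.
# Arguments: $1 = file system ID, $2 = region, $3 = local mount directory.
build_mount_cmd() {
  local opts="vers=3,nolock,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
  echo "sudo mount -t nfs -o $opts $1.$2.nas.aliyuncs.com:/ $3"
}
# Hypothetical values for illustration; review the printed command before running it:
build_mount_cmd "31a8e4****" "cn-hangzhou" "/mnt"
```

Echoing the assembled command first lets you review it before executing it on a real instance.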
Verify the mount result.
Run the following command:
mount -l
If the output contains the domain name of the mount target of the file system, the mount is successful.
After the file system is mounted, you can run the df -h command to view the storage capacity of the file system.
If the file system fails to be mounted, troubleshoot the issue. For more information, see FAQ about troubleshooting of mount failures.
After the NAS file system is mounted, you can read data from and write data to the NAS file system on the Linux ECS instance.
You can access the file system in the same way that you access a local directory.
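A quick way to confirm read/write access is to write a file and read it back. The sketch below uses a temporary local directory as a stand-in so that it runs anywhere; set NAS_DIR=/mnt on a real instance to exercise the actual NAS mount.

```shell
# Write a test file to the (stand-in) mount directory and read it back.
NAS_DIR="${NAS_DIR:-$(mktemp -d)}"        # stand-in; use NAS_DIR=/mnt on a real instance
echo "hello nas" > "$NAS_DIR/test.txt"    # write
cat "$NAS_DIR/test.txt"                   # read back; prints: hello nas
rm "$NAS_DIR/test.txt"                    # clean up
```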
(Optional) Automatically mount the NFS file system
You can configure the /etc/fstab file of a Linux ECS instance to automatically mount an NFS file system when the ECS instance is restarted.
Before you enable automatic mounting, make sure that the preceding manual mounting is successful. This prevents startup failures of the ECS instance.
To mount an Extreme NAS file system, perform the following operations. To mount a General-purpose NAS file system, skip this step and go to the next step.
Run the following command to open the configuration file:
vi /etc/systemd/system/sockets.target.wants/rpcbind.socket
In the configuration file, comment out the rpcbind parameters that are related to IPv6. Otherwise, the rpcbind service fails to run at startup.
If you want to enable automatic mounting for CentOS 6.x, perform the following steps:
Run the chkconfig netfs on command to enable the netfs service at startup.
Open the /etc/netconfig configuration file and comment out the inet6-related information.
Open the /etc/fstab configuration file to add mounting configurations.
General-purpose NAS file system
To use NFSv3 to mount the file system, add the following line:
file-system-id.region.nas.aliyuncs.com:/ /mnt nfs vers=3,nolock,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev,noresvport 0 0
To use NFSv4 to mount the file system, add the following line:
file-system-id.region.nas.aliyuncs.com:/ /mnt nfs vers=4,minorversion=0,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev,noresvport 0 0
Extreme NAS file system
file-system-id.region.extreme.nas.aliyuncs.com:/share /mnt nfs vers=3,nolock,noacl,proto=tcp,noresvport,_netdev 0 0
Note: If you want to enable automatic mounting for CentOS 6.x, run the chkconfig netfs on command to enable the netfs service at startup.
For more information, see Mount parameters. The following table describes the parameters that are not included in the preceding table.
Parameter
Description
_netdev
Prevents automatic mounting before the network is connected.
0 (the first value after noresvport)
Specifies whether to back up a file system by running the dump command. A non-zero value indicates that the file system is backed up. For a NAS file system, the default value is 0.
0 (the second value after noresvport)
The order in which the fsck command checks file systems at startup. For a NAS file system, the default value is 0, which indicates that the fsck command is not run at startup.
Run the following command to configure the /etc/rc.local file so that the ECS instance waits for network connectivity and then mounts the NFS file systems in /etc/fstab at startup:
[ ! -f /etc/rc.local ] && echo '#!/bin/bash' > /etc/rc.local; echo "for ((i=1; i<=10; i++)); do if ping -c 1 -W 3 aliyuncs.com; then break; else sleep 1; fi; done" >> /etc/rc.local; echo "sleep 3; mount -a -t nfs" >> /etc/rc.local; chmod +x /etc/rc.local
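Assuming /etc/rc.local did not previously exist, the one-liner above produces a file containing a shebang, a loop that waits up to about 10 attempts for the network (by pinging aliyuncs.com), and the mount command. The sketch below reproduces the same steps against a temporary file so you can inspect the result without root privileges:

```shell
# Reproduce what the one-liner writes, using a temp path instead of /etc/rc.local.
RC=$(mktemp)
rm -f "$RC"   # start from a nonexistent file, as on a fresh instance
[ ! -f "$RC" ] && echo '#!/bin/bash' > "$RC"
echo "for ((i=1; i<=10; i++)); do if ping -c 1 -W 3 aliyuncs.com; then break; else sleep 1; fi; done" >> "$RC"
echo "sleep 3; mount -a -t nfs" >> "$RC"
chmod +x "$RC"
cat "$RC"     # show the resulting startup script
```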
Run the reboot command to restart the ECS instance.
Important: If you restart the ECS instance, services are interrupted. We recommend that you perform the operation during off-peak hours.
Verify that automatic mounting is enabled.
Run the df -h command within one minute after the ECS instance restarts to check whether the NAS file system is mounted.
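To script the check, you can search the mount listing for the NAS mount-target domain. The helper below is our own illustration, not an official tool; on a real instance, pass it the output of mount -l or df -h.

```shell
# Return success if the given mount listing contains a NAS mount target.
nas_mounted_in() {
  printf '%s\n' "$1" | grep -q 'nas\.aliyuncs\.com'
}
# Usage on a real instance:
#   nas_mounted_in "$(mount -l)" && echo "NAS file system is mounted"
```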
Appendix: How NFS caching works and the related issues
With traditional disks, all data is cached in the page cache and modified pages are asynchronously flushed back to the backing store, so latency is low. In an NFS file system, however, NFS does not cache newly created files or newly written content in the page cache. Instead, it flushes them back to the NAS server as soon as possible. Therefore, when multiple ECS instances share an NFS file system, each NAS operation incurs one more network round trip than a disk operation. This overhead is generally between 100 μs and 1 ms. Flushing data back to the NAS server as soon as possible involves the following multi-node consistency models provided by NAS:
Timeout-based eventual consistency model
NFS caches the attributes (FileAttr) of directories and files. The operating system determines whether a directory or file has been modified on other ECS instances based on whether FileAttr has changed. In addition, after FileAttr is loaded, the operating system considers the caches (for example, the content of a file or the file list in a directory) valid within time T. After time T, the operating system obtains FileAttr from the server again. If FileAttr remains unchanged, the operating system considers all the caches related to the file or directory valid.
T is an adaptive value that defaults to between 1 second and 60 seconds.
File content cache: caches the content of a file.
Subdirectory cache: caches which files exist in a directory and which files do not exist in the directory.
Example of a file content cache:
ECS-1 reads 0 to 4 KB of file X: ECS-1 reads the file content for the first time and the content does not exist in the cache. ECS-1 reads the content from the server and caches it locally.
ECS-2 updates 0 to 4 KB of file X: ECS-2 writes the data into the server and updates mtime in FileAttr.
ECS-1 reads 0 to 4 KB of file X again: If the time interval between the second time ECS-1 reads 0 to 4 KB of file X and the first time ECS-1 reads 0 to 4 KB of file X is less than time T, FileAttr has not expired. In this case, ECS-1 directly reads the 0 to 4 KB of file X in the cache.
ECS-1 reads 0 to 4 KB of file X for the third time: If the time interval between the third time ECS-1 reads 0 to 4 KB of file X and the first time ECS-1 reads 0 to 4 KB of file X is greater than time T, ECS-1 obtains the new FileAttr from the server and finds that mtime has changed. In this case, ECS-1 discards the data in the cache and reads data from the server.
Example of a subdirectory cache:
ECS-1 attempts to find /a: ECS-1 finds that a does not exist upon the first search. ECS-1 then caches the information that a does not exist in the / directory.
ECS-2 creates the /a subdirectory.
ECS-1 attempts to find /a again: If the time interval between the second time ECS-1 searches for /a and the first time ECS-1 searches for /a is less than time T, ECS-1 directly uses the cache and reports that the subdirectory does not exist.
ECS-1 attempts to find /a for the third time: If the time interval between the third time ECS-1 searches for /a and the first time ECS-1 searches for /a is greater than time T, ECS-1 obtains the latest FileAttr of the / subdirectory and finds that mtime has changed. In this case, ECS-1 discards the data in the cache and searches for /a on the server.
For more information about the timeout-based eventual consistency model provided by NFS, see NFS.
File-based close-to-open (CTO) consistency model
The timeout-based eventual consistency model cannot ensure that ECS-2 immediately reads the data written by ECS-1. Therefore, to improve performance, NFS provides the file-based CTO consistency model. When two or more compute nodes concurrently read and write the same file, the changes made by ECS-1 may not be immediately read by ECS-2. However, once ECS-1 writes data into the file and closes the file, reopening the file on any compute node ensures access to the newly written data.
For example, a producer ECS instance produces file X and then executes the close operation. Then, the producer ECS instance sends message X to Message Queue, stating that file X has been produced. A consumer ECS instance that has subscribed to Message Queue reads message X (file X has been produced). Then, the consumer ECS instance executes the open operation on the file and reads the file through fd returned by the open operation. This way, the consumer ECS instance can definitely read all the content of file X. Assume that the consumer ECS instance has executed the open operation on file X and obtained fd before the producer ECS instance completes the file production. In this case, the consumer ECS instance may not be able to read the latest file content by directly using the fd after receiving message X.
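The producer/consumer pattern above can be sketched in shell. A local temporary directory stands in for the shared NAS mount point (set NAS_DIR=/mnt on real instances); shell redirection and cat naturally perform the close and open halves of the CTO contract, and the message-queue step is left as a comment.

```shell
NAS_DIR="${NAS_DIR:-$(mktemp -d)}"   # stand-in for the shared NAS mount point

# Producer side: the redirection writes fileX and closes it when the command
# completes, which is the "close" half of close-to-open consistency.
echo "payload of file X" > "$NAS_DIR/fileX"
# ... at this point the producer would publish "file X is ready" to the queue ...

# Consumer side: cat opens a fresh file descriptor only after the message
# arrives (the "open" half), so it is guaranteed to see the producer's data.
cat "$NAS_DIR/fileX"
```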
Typical issues
File creation delayed
Issue
ECS-1 creates the abc file, but it takes some time for ECS-2 to read the abc file. The latency ranges from 1 second to 1 minute.
Cause
The issue is caused by the negative lookup cache, which works as designed within time T. For example, ECS-2 attempts to access the abc file before ECS-1 creates it and finds that the file does not exist. As a result, a record indicating that the abc file does not exist is cached. Because FileAttr has not expired within time T, when ECS-2 accesses the file again, it still reads the cached record indicating that the abc file does not exist.
Solutions
To ensure that ECS-2 can read the file immediately after ECS-1 creates it, you can use one of the following solutions:
Solution 1: Disable the negative lookup cache on ECS-2 so that files that do not exist are not cached. This solution incurs the least overhead.
Add the lookupcache=positive (default value: lookupcache=all) field when you mount the file system. Run the following command to mount the file system:
sudo mount -t nfs -o vers=3,nolock,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,lookupcache=positive file-system-id.region.nas.aliyuncs.com:/ /mnt
Solution 2: Disable all caches on ECS-2. This solution results in poor performance. Select an appropriate solution based on your business requirements.
Add the actimeo=0 field when you mount the file system. Run the following command to mount the file system:
sudo mount -t nfs -o vers=3,nolock,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,actimeo=0 file-system-id.region.nas.aliyuncs.com:/ /mnt
Writing data into a file delayed
Issue
ECS-1 has updated the abc file. However, when ECS-2 reads the file immediately afterward, it still sees the old content.
Causes
The following two causes are involved:
Cause 1: After ECS-1 writes the abc file, ECS-1 does not flush the content immediately. Instead, it caches the content into the page cache and relies on the application layer to call the fsync or close operation.
Cause 2: File caches exist on ECS-2. Therefore, ECS-2 may not immediately obtain the latest file content from the server. For example, ECS-2 has cached the data before ECS-1 updates the abc file. As a result, the cached content is still used when ECS-2 reads the file.
Solutions
To ensure that ECS-2 can read the latest content immediately after ECS-1 updates the file, you can use one of the following solutions:
Solution 1: Apply the file-based close-to-open (CTO) consistency model so that the read and write operations on ECS-1 or ECS-2 conform to CTO consistency. This way, ECS-2 can definitely read the latest data. Specifically, ECS-1 executes the close or fsync operation after it updates a file. ECS-2 executes the open operation before it reads the file.
Solution 2: Disable all caches on ECS-1 and ECS-2. This solution results in poor performance. Select an appropriate solution based on your business requirements.
Disable caching on ECS-1. When you mount the file system, add the noac option to ensure that all written data is immediately flushed to the NAS server. Run the following command to mount the file system:
sudo mount -t nfs -o vers=3,nolock,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,noac file-system-id.region.nas.aliyuncs.com:/ /mnt
Note: If you call the fsync operation after the write operation on ECS-1 is complete, or if you call the sync operation to write data, you can replace noac in the preceding command with actimeo=0 to slightly improve performance. noac is equivalent to actimeo=0 plus sync, which forces all write operations to be synchronous.
Disable caching on ECS-2. When you mount the file system, add the actimeo=0 option to disable all caches. Run the following command to mount the file system:
sudo mount -t nfs -o vers=3,nolock,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,actimeo=0 file-system-id.region.nas.aliyuncs.com:/ /mnt