This topic provides answers to some commonly asked questions about using ossfs.
Introduction
An ossfs error includes a message that can help you identify and resolve the issue. For example, you can enable the debug logging feature to troubleshoot socket connection failures or errors with HTTP status code 4xx or 5xx.
Errors with HTTP status code 403 occur when access is denied due to a lack of the required access permissions.
Errors with HTTP status code 400 occur due to incorrect requests.
Errors with HTTP status code 5xx occur due to network jitter or server-side errors.
ossfs has the following characteristics:
ossfs mounts remote Object Storage Service (OSS) buckets to local disks. We recommend that you do not use ossfs to manage business applications that require high read and write performance.
ossfs operations are not atomic, which means that an operation may succeed locally but fail remotely on OSS.
If ossfs cannot meet your business requirements, you can use ossutil.
Insufficient permissions
What do I do if HTTP status code 403 is returned when I run the touch command on an object in a mounted bucket?
Analysis: HTTP status code 403 is returned when the operation is unauthorized. The error may occur in the following scenarios:
The storage class of the object is Archive.
The AccessKey pair used does not have the required permissions to manage the bucket.
Solutions:
Restore the Archive object or enable real-time access of Archive objects for the bucket in which the Archive object is stored.
Grant the required permissions to the Alibaba Cloud account that uses the AccessKey pair.
What do I do if the "Operation not permitted" error message appears when I run the rm command to delete an object?
Analysis: When you run the rm command to delete an object, the DeleteObject operation is called to delete the object. If you mount a bucket by using a RAM user, check whether the RAM user has the permissions to delete the object.
Solution: Grant the RAM user the required permissions to delete objects. For more information, see RAM policies and Common examples of RAM policies.
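For example, a minimal RAM policy statement that grants delete permissions might look like the following sketch. The bucket name examplebucket is a placeholder; adjust the Resource element to match your own bucket and prefix:
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["oss:DeleteObject"],
      "Resource": ["acs:oss:*:*:examplebucket/*"]
    }
  ]
}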
What do I do if the "The bucket you are attempting to access must be addressed using the specified endpoint" error message appears when I access a bucket?
Analysis: This error message appears because you do not use the correct endpoint to access the bucket. This error message may appear in the following scenarios:
The bucket and endpoint do not match.
The UID of the bucket owner is different from that of the Alibaba Cloud account to which the AccessKey pair belongs.
Solution: Check whether the configurations are correct and modify the configurations if necessary.
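For example, if examplebucket resides in the China (Hangzhou) region, the endpoint in the mount command must belong to that region. The bucket name, mount point, and endpoint below are placeholders that follow the examples used elsewhere in this topic:
ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com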
Mount errors
What do I do if the "ossfs: unable to access MOUNTPOINT /tmp/ossfs: Transport endpoint is not connected" error message appears when I use ossfs to mount a bucket?
Analysis: This error message appears because the destination directory of the OSS bucket is not created.
Solution: Create the destination directory and then mount the bucket.
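For example, the following commands create the mount directory and then mount the bucket. The bucket name, mount point, and endpoint are placeholders that follow the examples used elsewhere in this topic:
mkdir -p /tmp/ossfs
ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com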
What do I do if the "fusermount: failed to open current directory: Permission denied" error message appears when I use ossfs to mount a bucket?
Analysis: This error message appears due to a bug in fuse: you must have read permissions on the current directory, not only on the destination directory of the OSS bucket.
Solution: Run the cd command to switch to a directory on which you have the read permissions, and use ossfs to mount the bucket.
What do I do if the "ossfs: Mountpoint directory /tmp/ossfs is not empty. if you are sure this is safe, can use the 'nonempty' mount option" error message appears when I use ossfs to mount a bucket?
Analysis: By default, ossfs can mount an OSS bucket only to an empty directory. This error message appears when ossfs attempts to mount a bucket to a directory that is not empty.
Solution: Switch to an empty directory and re-mount the bucket. If you still want the bucket to be mounted to the current directory, use the -ononempty option.
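For example, the following command mounts the bucket to a non-empty directory. The bucket name, mount point, and endpoint are placeholders:
ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com -o nonempty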
What do I do if the "ops-nginx-12-32 s3fs[163588]: [tid-75593]curl.cpp:CurlProgress(532): timeout now: 1656407871, curl_times[curl]: 1656407810, readwrite_timeout: 60" error message appears when I mount a bucket?
Analysis: The mount operation timed out.
Solution: ossfs uses the readwrite_timeout option to specify the timeout period for read or write requests. Unit: seconds. Default value: 60. You must increase the value of this option based on your business scenario.
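For example, the following command raises the timeout period to 120 seconds. The bucket name, mount point, and endpoint are placeholders, and 120 is only an example value:
ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com -o readwrite_timeout=120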
What do I do if the "ossfs: credentials file /etc/passwd-ossfs should not have others permissions" error message appears when I mount a bucket?
Analysis: The permissions on the /etc/passwd-ossfs file are incorrectly configured.
Solution: The /etc/passwd-ossfs file contains access credentials, so you need to prevent other users from accessing the file. To resolve this issue, modify the permissions on the file by running the chmod 640 /etc/passwd-ossfs command.
What do I do if the "operation not permitted" error message appears when I run the ls command to list objects in a directory after I mount a bucket?
Analysis: The file system has strict limits on object and directory names. This error message appears when the names of objects in the bucket contain invisible characters.
Solution: Rename the objects appropriately and then run the ls command. The objects in the directory are displayed.
What do I do if the "fuse: device not found, try 'modprobe fuse'" error message appears?
Analysis: When you use ossfs to perform a mount operation in Docker, the "fuse: device not found, try 'modprobe fuse'" error message commonly appears because the Docker container does not have the required access permissions or the permissions to load the fuse kernel module.
Solution: When you use ossfs in a Docker container, specify the --privileged=true parameter to run the Docker container in privileged mode so that processes in the container have the capabilities of the host, such as using the FUSE file system. The following sample command shows how to run a Docker container with the --privileged flag:
docker run --privileged=true -d your_image
Cost reduction
What do I do to prevent unnecessary charges due to object scanning by background programs when I use ossfs to mount a bucket to an ECS instance?
Analysis: When background programs scan a directory to which ossfs mounted a bucket, a request is sent to OSS. If a large number of requests are sent, you are charged for the requests.
Solution: Use the auditd tool to check the background programs that scan the directory to which ossfs mounted the bucket. Perform the following steps:
Install and start auditd.
sudo apt-get install auditd
sudo service auditd start
Set the directory to which ossfs mounted the bucket to the directory that you want to monitor. For example, run the following command to monitor the /mnt/ossfs directory:
auditctl -w /mnt/ossfs
Check the audit log to view the background programs that scanned the directory.
ausearch -i | grep /mnt/ossfs
Specify parameters to skip scheduled scans.
For example, if the updatedb program scanned the directory, you can modify /etc/updatedb.conf to skip the scans performed by the program. Steps:
Add fuse.ossfs to PRUNEFS.
Add the directory name to PRUNEPATHS.
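A minimal sketch of the relevant lines in /etc/updatedb.conf is shown below. The ellipses stand for the values that already exist in your file, and /mnt/ossfs is the example mount point used above:
PRUNEFS = "... fuse.ossfs"
PRUNEPATHS = "... /mnt/ossfs"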
Disk and memory errors
How do I resolve occasional disconnections from ossfs?
Analysis:
If you enable the debug logging feature and specify the -d -o f2 parameter, ossfs writes logs to /var/log/message. After analyzing the logs, you find that ossfs requests a large amount of memory for the listbucket and listobject operations, which triggers an out of memory (OOM) error.
Note: The listobject operation sends an HTTP request to OSS to obtain object metadata. If you have a large number of objects, running the ls command requires a large amount of memory to obtain the object metadata.
Solutions:
Specify the -omax_stat_cache_size=xxx parameter to increase the size of the stat cache. Object metadata is stored in the local cache, so the first run of the ls command is slow, but subsequent runs are fast. The default value of this parameter is 1000, and the metadata of 1,000 objects consumes approximately 4 MB of memory. Change the value based on the memory size of your machine.
During read and write operations, ossfs writes a large number of temporary cache files, similar to how NGINX works, which may result in insufficient disk space. After ossfs exits, the temporary files are automatically deleted.
Use ossutil instead of ossfs. You can use ossfs for business applications that do not require high real-time performance. We recommend that you use ossutil for business applications that require high reliability and stability.
Why does ossfs occupy the full storage capacity of a disk?
Cause: To improve performance, ossfs uses the disk to save temporary data that is uploaded or downloaded by default. In this case, the storage capacity of the disk may be exhausted.
Solution: Use the -oensure_diskfree option to specify a reserved storage capacity for the disk. For example, if you want to specify a reserved storage capacity of 20 GB, run the following command:
ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com -oensure_diskfree=20480
Why does the storage capacity of a disk change to 256 TB when the df command is run after ossfs mounts a bucket?
The storage capacity of a disk that is displayed when the df command is run does not indicate the actual storage capacity of the OSS bucket. Size (the total storage capacity of a disk) and Avail (the available storage capacity of a disk) are fixed at 256 TB, and Used (the used storage capacity of a disk) is fixed at 0 TB.
The storage capacity of an OSS bucket is unlimited. The used storage capacity varies based on your actual storage usage. For more information about bucket usage, see View the resource usage of a bucket.
What do I do if the "input/output error" error message appears when I run the cp command to copy data?
Analysis: This error message appears when system disk errors are captured. You can check whether heavy read and write loads exist on the disk.
Solution: Specify multipart parameters to manage the read and write operations on objects. You can run the ossfs -h command to view the multipart parameters.
What do I do if the "input/output error" error message appears when I use rsync for data synchronization?
Analysis: This error message appears when ossfs is used together with rsync. In this case, a large object of 141 GB is copied, which causes heavy read and write loads on the disk.
Solution: Use ossutil to download OSS objects to a local Elastic Compute Service (ECS) instance or upload objects from a local device to an ECS instance by performing multipart upload.
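For example, a multipart (resumable) download with ossutil might look like the following sketch. The bucket, object, and local path are placeholders, the part size (in bytes) and concurrency values are only example values, and the option names follow ossutil 1.x and may differ in other versions:
ossutil cp oss://examplebucket/exampleobject /localfolder/ --part-size 104857600 --parallel 5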
What do I do if the "There is no enough disk space for used as cache(or temporary)" error message appears when I use ossfs to upload a large object to OSS?
Cause:
The available disk space is less than the product of the values of multipart_size and parallel_count. multipart_size indicates the part size (default unit: MB). parallel_count indicates the number of parts that are uploaded in parallel (default value: 5).
Analysis:
By default, ossfs uploads large objects by using multipart upload. During the upload, ossfs writes a temporary cache file to the /tmp directory. Before ossfs writes the temporary cache file, it checks whether the available space of the disk on which the /tmp directory is located is less than the product of the values of multipart_size and parallel_count. If the available disk space is greater than the product, the temporary cache file is written as expected. If the available disk space is less than the product, the system reports that the available disk space is insufficient.
For example, the available space of the disk is 300 GB and the size of the object that you want to upload is 200 GB, but multipart_size is set to 100000 MB (100 GB) and the number of parts that are uploaded in parallel remains at the default value of 5. In this case, ossfs determines that it requires 500 GB (100 GB × 5) of disk space, which is greater than the available space of the disk.
Solution:
If the number of parts that you want to upload in parallel remains at the default value of 5, specify a valid value for multipart_size:
For example, if the available space of the disk is 300 GB and the size of the object that you want to upload is 200 GB, set multipart_size to 20.
For example, if the available space of the disk is 300 GB and the size of the object that you want to upload is 500 GB, set multipart_size to 50.
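For example, the following mount command sets the part size to 20 MB while keeping the default number of parallel parts. The bucket name, mount point, and endpoint are placeholders:
ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com -o multipart_size=20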
Version dependency errors
What do I do if the "fuse: warning: library too old, some operations may not work" error message appears when I install ossfs?
Analysis: In most cases, this error message appears because you manually installed libfuse, and the libfuse version used to compile ossfs is later than the version that is linked to ossfs at runtime. The ossfs installation package provided by Alibaba Cloud contains libfuse 2.8.4. When you install ossfs on CentOS 5.x or CentOS 6.x, this error message appears if libfuse 2.8.3 already exists in the system and is linked to ossfs.
You can run the ldd $(which ossfs) | grep fuse command to check the fuse version that is linked to ossfs during runtime. If /lib64/libfuse.so.2 is returned in the command output, you can run the ls -l /lib64/libfuse* command to check the fuse version.
Solution: Link ossfs to the correct fuse version.
Run the rpm -ql ossfs | grep fuse command to find the directory of libfuse.
If /usr/lib/libfuse.so.2 is returned in the command output, run the LD_LIBRARY_PATH=/usr/lib ossfs... command to run ossfs.
What do I do if the error message shown in the following figure appears when I install fuse?
Analysis: This error message appears because the version of fuse does not meet the requirements of ossfs.
Solution: Download and install the latest version of fuse. Do not use YUM to install fuse. For more information, visit libfuse.
What do I do if the "input/output error" error message appears when I run the Is command to list objects?
Cause: In most cases, this error message appears in CentOS, with the NSS error -8023 error code in the error log. A communication problem occurs when ossfs uses libcurl to communicate over HTTPS. The problem may be caused by an outdated version of the Network Security Services (NSS) library that libcurl depends on.
Solution: Run the following command to upgrade the NSS library set:
yum update nss
What do I do if the "conflicts with file from package fuse-devel" error message appears when I use yum/apt-get to install ossfs?
Analysis: This error message appears because an earlier version of fuse exists in the system and conflicts with the dependency version of ossfs.
Solution: Use a package manager to uninstall fuse and reinstall ossfs.
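For example, on CentOS you might run commands similar to the following. The package names can differ by distribution, and the .rpm file name is a placeholder for the ossfs package that you downloaded:
sudo yum remove fuse fuse-devel
sudo yum localinstall ossfs_<version>_<platform>.rpm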
Other errors
What do I do if the value of the Content-Type parameter of the objects that are uploaded to OSS by using ossfs is application/octet-stream?
Analysis: When you upload an object by using ossfs, ossfs queries the /etc/mime.types file to specify the Content-Type parameter for the object. If the /etc/mime.types file does not exist, the Content-Type parameter is set to application/octet-stream.
Solution: Check whether the /etc/mime.types file exists. If the file does not exist, add the file.
Automatically add the mime.types file
Ubuntu
Run the sudo apt-get install mime-support command to add the file.
CentOS
Run the sudo yum install mailcap command to add the file.
Manually add the mime.types file
Create the mime.types file.
vi /etc/mime.types
Add the desired content types in the application/javascript js format. Each line supports one type (see the sample entries after these steps).
Remount the bucket.
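For reference, a few typical entries in /etc/mime.types look like the following. These content types are common examples, not an exhaustive list:
application/javascript js
text/html html htm
image/png png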
Why does ossfs recognize a directory as a regular object?
Scenario 1:
Analysis: If you set Content-Type to text/plain when you create a directory, which is an object whose name ends with a forward slash (/), ossfs recognizes the object as a regular object.
Solution: Specify the -ocomplement_stat parameter when you perform the mount operation. If the size of the object is zero bytes or one byte, ossfs recognizes it as a directory.
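For example, the following mount command specifies the parameter. The bucket name, mount point, and endpoint are placeholders:
ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com -o complement_stat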
Scenario 2:
Analysis: Run the ossutil stat command on the directory name, which ends with a forward slash (/), for example, ossutil stat oss://[bucket]/folder/, and view the command output.
View the Content-Length field, which indicates the size of the object. If the size of the object is not zero bytes, ossfs recognizes it as an object.
Solution: If you no longer need the directory, run the ossutil rm oss://[bucket]/folder/ command to delete it (the objects in the directory are not deleted), or use ossutil to upload an object that has the same name and whose size is zero bytes to overwrite the directory.
If the size of the object is zero bytes, view the Content-Type field, which indicates the object attribute. If the Content-Type field is not application/x-directory, httpd/unix-directory, binary/octet-stream, or application/octet-stream, ossfs recognizes it as an object.
Solution: Run the ossutil rm oss://[bucket]/folder/ command to delete the object (the objects in the directory are not deleted).
What do I do if ossfs fails to perform the mv operation on an object?
Cause: The source object may be in one of the following storage classes: Archive, Cold Archive, and Deep Cold Archive.
Solution: Before you perform the mv operation on an Archive, Cold Archive, or Deep Cold Archive object, restore the object first. For more information, see Restore objects.
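For example, assuming the object is oss://examplebucket/exampleobject (a placeholder path), you can restore it with ossutil before you run the mv command:
ossutil restore oss://examplebucket/exampleobject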
Can I mount a bucket on Windows by using ossfs?
No, you cannot mount a bucket on Windows by using ossfs. You can use Rclone to mount a bucket on Windows. For more information, see Rclone.
Why is the object information such as the size of an object displayed by using ossfs different from that displayed by using other tools?
Analysis: By default, ossfs caches object metadata, such as the size and access control list (ACL). Metadata caching accelerates object access by eliminating the need to send a request to OSS every time the ls command is run. However, if you modify the object metadata by using tools such as OSS SDKs, the OSS console, or ossutil, the changes are not synchronized to ossfs because of metadata caching. As a result, the metadata displayed by ossfs differs from the metadata displayed by other tools.
Solution: Set the -omax_stat_cache_size parameter to 0 to disable the metadata caching feature. In this case, each time you run the ls command, a request is sent to OSS to obtain the latest object metadata.
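For example, the following mount command disables metadata caching. The bucket name, mount point, and endpoint are placeholders:
ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com -o max_stat_cache_size=0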
Why does ossfs require a long period of time to mount a versioning-enabled bucket?
Cause: By default, ossfs lists objects by calling the ListObjects (GetBucket) operation. If versioning is enabled for a bucket, and the bucket contains one or more previous versions of objects and a large number of expired delete markers, the response speed decreases when you call the ListObjects (GetBucket) operation to list current object versions. In this case, ossfs requires a long period of time to mount a versioning-enabled bucket.
Solution: Use the -olistobjectsV2 option to allow ossfs to call the ListObjectsV2 (GetBucketV2) operation, which provides better object listing performance.
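For example, the following mount command specifies the option as written above. The bucket name, mount point, and endpoint are placeholders:
ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com -olistobjectsV2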
How do I mount a bucket over HTTPS by using ossfs?
You can use ossfs to mount a bucket over HTTPS. In this example, the China (Hangzhou) region is used. You can run the following command to mount a bucket over HTTPS:
ossfs examplebucket /tmp/ossfs -o url=https://oss-cn-hangzhou.aliyuncs.com
Why does the ls command run very slow when the directory contains a large number of objects?
Analysis: If a directory contains N objects, OSS HTTP requests must be initiated N times to run the ls command to list the N objects in the directory. This can cause serious performance issues if the number of objects is large.
Solution: Increase the stat cache size by specifying the -omax_stat_cache_size parameter. This way, the first run of the ls command is slow, but subsequent runs are fast because the metadata is stored in the local cache. For ossfs versions earlier than 1.91.1, the default value of this parameter is 1000. For ossfs 1.91.1 and later, the default value is 10000. The metadata of 10,000 objects consumes approximately dozens of MB of memory. Change the value based on the memory size of your machine.
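For example, the following mount command caches the metadata of up to 100,000 objects. The bucket name, mount point, and endpoint are placeholders, and the cache size is only an example value that you should choose based on your available memory:
ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com -o max_stat_cache_size=100000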
What do I do if the "fusermount: failed to unmount /mnt/ossfs-bucket: Device or resource busy" error message appears?
Analysis: A process is accessing objects in the /mnt/ossfs-bucket directory. As a result, the bucket cannot be unmounted.
Solution:
Run the lsof /mnt/ossfs-bucket command to find the process that is accessing the directory.
Run the kill command to stop the process.
Run the fusermount -u /mnt/ossfs-bucket command to unmount the bucket.
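A minimal sketch of the sequence is shown below. The process ID 12345 is a placeholder; replace it with the PID reported by lsof:
lsof /mnt/ossfs-bucket
kill 12345
fusermount -u /mnt/ossfs-bucket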