ossfs allows you to seamlessly access and manage Object Storage Service (OSS) objects in the same way as you manage local files. This topic describes new features of ossfs V1.91.2 and V1.91.3.
New features in V1.91.2
Direct read
By default, when you use ossfs to read an object in a bucket, ossfs downloads the object from the bucket, writes it to a local disk as a file, and serves your read requests from that local file. Disk writes in ossfs are asynchronous: ossfs first writes the downloaded data to the page cache in memory and then asynchronously flushes the dirty pages to the local disk. If a subsequent read hits the page cache, the data is served directly from memory.
By default, ossfs retains all prefetched data on the local disk until the available disk space is used up.
If the memory is large enough, all downloaded data fits in the page cache, and read performance depends only on the network bandwidth of the machine, not on disk bandwidth.
If the memory is not large enough to hold the entire file, part of the data is served from the local disk. As a result, read performance depends on both disk bandwidth and network bandwidth, and disk bandwidth is usually the major constraint.
In default read mode, the performance of reading large files is limited by disk bandwidth. To address this performance limitation, ossfs offers the direct read mode.
In direct read mode, ossfs downloads data into a memory buffer and serves read requests from that buffer. This mode eliminates disks from the data reading process, resulting in improved sequential read performance that leverages network bandwidth.
In direct read mode, ossfs manages downloaded data in chunks. The size of each chunk is 4 MB by default and can be changed by using the direct_read_chunk_size parameter. In memory, ossfs retains the data in a window that spans from one chunk before the current chunk to direct_read_prefetch_chunks chunks after it. For example, with the default 4 MB chunk size and the default direct_read_prefetch_chunks value of 32, the window covers 4 MB before the current chunk and 128 MB after it. If the offset of the next read operation falls outside this window, ossfs discards the downloaded data and starts prefetching from the new offset.
The direct read mode is suitable only for sequential reads. It is not applicable in the following scenarios:
Random reads: ossfs retains data only in the current offset window. If the offsets of random reads do not fall within the window, data is downloaded and released multiple times, which consumes unnecessary network resources and degrades read performance.
Writes: ossfs still writes data to the local disk first. If a write request is made to a file while the file is being read in direct read mode, ossfs automatically switches that file to the default read mode, in which the file is first downloaded to the local disk. Writing to one file during a direct read does not cause other files to exit the direct read mode.
The following table describes parameters that are related to direct reads.
Parameter | Description | Default value
direct_read | Enables the direct read mode. To enable the direct read mode, specify -odirect_read in the mount command. | Disabled
direct_read_chunk_size | The size of data (in MB) to be prefetched by each prefetch task. | 4
direct_read_prefetch_chunks | The number of chunks to be prefetched. | 32
direct_read_prefetch_limit | The maximum size of data (in MB) that can be prefetched. | 1024
direct_read_prefetch_thread | The number of threads that perform prefetching. | 64
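For reference, the following mount command shows how these parameters can be combined. The bucket name, mount point, and endpoint are placeholders in the same form as the examples later in this topic, and the parameter values are illustrative rather than recommended settings:
# Illustrative values only; tune them for your workload.
ossfs [bucket name] [mountpoint] -ourl=[endpoint] -odirect_read -odirect_read_chunk_size=8 -odirect_read_prefetch_chunks=64 -odirect_read_prefetch_limit=2048 -odirect_read_prefetch_thread=32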
New features in V1.91.3
New parameter direct_read_backward_chunks
In ossfs V1.91.2, the direct read mode retains in memory only the data from one chunk before the current chunk to direct_read_prefetch_chunks chunks after it, so only one chunk before the current read position is kept. If you read data that is located more than one chunk before the current read position, a significant amount of prefetched data is discarded, which can cause additional bandwidth consumption, wasted resources, and performance degradation.
In ossfs V1.91.3, the direct_read_backward_chunks parameter is added to allow ossfs to retain a specified number of chunks before the current chunk in memory. You can use direct_read_backward_chunks together with direct_read_prefetch_chunks to keep in memory the data from the specified number of chunks before the current chunk to the specified number of chunks after it. In AI inference scenarios, such as loading Safetensors files (random reads), you can increase the value of direct_read_backward_chunks to retain more data in memory, reduce repeated downloads, and improve performance.
Parameter | Description | Default value
direct_read_backward_chunks | The number of chunks before the current read position that can be retained in memory in direct read mode. The default size of a chunk is 4 MB. | 1
stat_cache_expire | The validity period of metadata, in seconds. Starting from this version, the parameter can be set to -1, which specifies that the metadata never expires. When metadata expires, data is reloaded into the buffer. | 900
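For reference, the following mount command combines the V1.91.3 parameters with the existing direct read parameters. The values are illustrative, and the placeholders follow the same form as the mount commands later in this topic:
# Illustrative values only; -ostat_cache_expire=-1 keeps metadata valid indefinitely.
ossfs [bucket name] [mountpoint] -ourl=[endpoint] -odirect_read -odirect_read_chunk_size=8 -odirect_read_prefetch_chunks=64 -odirect_read_backward_chunks=16 -ostat_cache_expire=-1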
Hybrid read mode
In scenarios where random reads are frequent and read offsets span a wide range:
In direct read mode, ossfs frequently downloads data, discards data, and re-downloads data. This significantly reduces read performance.
In default read mode, ossfs downloads data to the local disk and does not discard downloaded data until the disk space is used up. As a result, no repeated downloads occur.
If the requested file is small enough to fit in the page cache, it is served directly from memory, so read performance is not restricted by disk performance.
If the requested file is too large to fit completely in the page cache, read performance is restricted by disk performance.
The hybrid read mode combines the benefits of the default read mode (reading data from the disk) and the direct read mode. In hybrid read mode, small files take full advantage of page cache acceleration, whereas large files benefit from the page cache at first and switch to the direct read mode after a specified threshold is reached. This avoids potential disk performance bottlenecks when reading both small and large files.
Parameter | Description | Default value
direct_read_local_file_cache_size_mb | In hybrid read mode, data is first downloaded to the local disk. When the amount of downloaded data exceeds the threshold (in MB) specified by this parameter, ossfs switches to the direct read mode. | 0 (equivalent to using only the direct read mode)
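For reference, the following mount command enables the hybrid read mode with an illustrative 1,024 MB threshold: the first 1,024 MB of each file is downloaded to the local disk (and typically served from the page cache), after which ossfs switches the file to the direct read mode. The placeholders and values are examples only:
# Illustrative threshold: switch a file to direct read after 1,024 MB has been downloaded.
ossfs [bucket name] [mountpoint] -ourl=[endpoint] -odirect_read -odirect_read_local_file_cache_size_mb=1024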
Performance testing
Sequential reading
Machine specifications
Memory: 32 GB
Disk bandwidth: 130 MB/s
Internal bandwidth: 750 MB/s
Mount commands
Mount ossfs in default read mode:
ossfs [bucket name] [mountpoint] -ourl=[endpoint] -oparallel_count=32 -omultipart_size=16
Mount ossfs in direct read mode:
ossfs [bucket name] [mountpoint] -ourl=[endpoint] -odirect_read -odirect_read_chunk_size=8 -odirect_read_prefetch_chunks=64
The following command is used for performance testing:
dd if=testfile of=/dev/null bs=1M status=progress
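To keep repeated runs comparable, the Linux page cache can be cleared before each run so that a later run is not served from memory. A common way to do this (root privileges required) follows. Note that in the default read mode ossfs may also keep a copy of the data on the local disk, so this only removes the in-memory caching effect:
# Optional: drop the page cache so that each run starts cold.
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches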
The following table provides test results.
File size | Default read mode | Direct read mode
1 GB | 646 MB/s | 592 MB/s
5 GB | 630 MB/s | 611 MB/s
10 GB | 260 MB/s | 574 MB/s
File loading by using PyTorch
Machine specifications
Memory: 15 GB
Disk bandwidth: 150 MB/s
Internal bandwidth: 500 MB/s
Mount commands
Mount ossfs in default read mode:
ossfs [bucket name] [mountpoint] -ourl=[endpoint] -oparallel_count=32 -omultipart_size=16
Mount ossfs in direct read mode:
ossfs [bucket name] [mountpoint] -ourl=[endpoint] -odirect_read -odirect_read_chunk_size=8 -odirect_read_prefetch_chunks=64 -odirect_read_backward_chunks=16
Mount ossfs in hybrid read mode:
ossfs [bucket name] [mountpoint] -ourl=[endpoint] -oparallel_count=32 -omultipart_size=16 -odirect_read -odirect_read_chunk_size=8 -odirect_read_prefetch_chunks=64 -odirect_read_backward_chunks=16 -odirect_read_local_file_cache_size_mb=3072
Testing
The following script is used for performance testing:
import time
from safetensors.torch import load_file

file_path = "./my_folder/bert.safetensors"
start = time.perf_counter()
loaded = load_file(file_path)
end = time.perf_counter()
elapsed = end - start
print("time_spent: ", elapsed)
The following table provides test results.
Note: The test results are for reference only. The actual read performance varies with the file size and the structure of the Safetensors model.
File size | Default read mode | Direct read mode | Hybrid read mode
2.0 GB | 4.00s | 5.86s | 3.94s
5.3 GB | 20.54s | 27.33s | 19.91s
6.5 GB | 30.14s | 24.23s | 17.93s