
Lindorm: Storage types

Last Updated: Feb 03, 2026

Lindorm uses LindormDFS as its underlying storage, which decouples storage resources from computing resources. You are charged for the storage resources of your instance separately. You can scale up the storage of an instance without interrupting your business, and the storage capacity of a Lindorm instance is shared among all engines within the instance.

Storage types

Lindorm supports the following storage types. For each type, the latency, applicable scenarios, supported engines, and scalability options are listed.

Standard storage

  • Latency: 3 ms to 5 ms
  • Scenarios: Real-time data access for data streams, chat, real-time reports, and online computing.
  • Supported engines: LindormTable, LindormTSDB, LindormSearch, LindormDFS, and the Lindorm streaming engine.
  • Scalability: Optional capacity storage can be purchased.

Performance storage

  • Latency: 0.2 ms to 0.5 ms
  • Scenarios: Low-latency data access to power ad bidding, user personas, audience segmentation, real-time search, and risk control.
  • Supported engines: LindormTable, LindormTSDB, LindormSearch, LindormDFS, and the Lindorm streaming engine.
  • Scalability: Optional capacity storage can be purchased.

Capacity storage

  • Latency: 15 ms to 3 s
  • Scenarios: Storage of infrequently accessed data, such as monitoring logs, historical orders, audio and video archives, data lake storage, and offline computing data.
  • Supported engines: LindormTable, LindormDFS, and the Lindorm streaming engine.
  • Scalability: N/A

Note
Capacity storage uses high-density disk arrays to provide cost-effective storage services and supports high read/write throughput. However, it delivers relatively poor random read performance. Capacity storage is suitable for scenarios in which many write requests and only a small number of read requests are processed, and for big data analytics. For more information, see Capacity storage read throttling.

Local SSDs

  • Latency: 0.1 ms to 0.3 ms
  • Scenarios: Online businesses such as online gaming, e-commerce, ApsaraVideo Live, and media that require the low latency and high I/O performance of block storage for I/O-intensive applications.
  • Supported engines: LindormTable, LindormTSDB, LindormSearch, and LindormDFS.
  • Scalability:
    • Optional capacity storage can be purchased.
    • Local SSDs can be pooled together with attached cloud disks.
    • Heterogeneous replicas are supported.
    • Erasure coding that uses 1.5 replicas is supported.

Note
When you purchase an instance and set Storage Type to Local SSD, you can select only Local Disk node specifications and the number of data engine nodes.

Local HDDs

  • Latency: 10 ms to 300 ms
  • Scenarios: Business scenarios in industries such as the Internet and finance that involve massive data storage, offline computing, and big data analysis.
  • Supported engines: LindormTable, LindormTSDB, LindormSearch, and LindormDFS.
  • Scalability:
    • Access to attached cloud disks can be accelerated.
    • Heterogeneous replicas are supported.
    • Erasure coding that uses 1.5 replicas is supported.

Note
When you purchase an instance and set Storage Type to Local HDD, you can select only Local Disk node specifications and the number of data engine nodes.

Important
  • Latency refers only to the storage latency, not to the end-to-end latency.

  • By default, local SSDs and local HDDs store three replicas of data for redundancy. To ensure that three data replicas remain available when a node fails, you must configure at least three nodes for a Lindorm instance that uses local disks.

  • The usage of cloud storage and the usage of local disks are measured in different ways:

    • The usage of performance storage, standard storage, and capacity storage is measured by logical capacity. For example, if the logical size of a database file is 100 GiB, the file consumes 100 GiB of cloud storage. LindormDFS ensures the availability and reliability of the data, so you do not need to account for data replicas when you plan the storage capacity.

    • The usage of local SSDs, local HDDs, and attached cloud disks is measured by physical capacity, so you must account for data replicas when you plan the storage capacity. For example, if the logical size of a database file is 100 GiB and three replicas of the file are stored on the local HDDs of a Lindorm instance, the file consumes 300 GiB of local HDD capacity. Availability and reliability are ensured by the replicas that LindormDFS generates: by default, three replicas for data stored on local disks and two replicas for data stored on attached cloud disks.
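As a rough illustration of the two measurement methods, the following Python sketch computes the raw capacity that a file consumes under each model. The function names are illustrative and are not part of any Lindorm API; the replica counts are the defaults described above.

```python
# Illustrative capacity math only; these functions are not part of any Lindorm API.

def cloud_storage_usage(logical_gib: float) -> float:
    """Cloud storage (standard, performance, capacity) is measured by
    logical capacity; LindormDFS handles redundancy internally."""
    return logical_gib

def local_disk_usage(logical_gib: float, replicas: int = 3) -> float:
    """Local disks and attached cloud disks are measured by physical
    capacity, so every replica counts toward usage."""
    return logical_gib * replicas

print(cloud_storage_usage(100))            # 100 GiB billed for a 100 GiB file
print(local_disk_usage(100))               # 300 GiB on local disks (3 replicas by default)
print(local_disk_usage(100, replicas=2))   # 200 GiB on attached cloud disks (2 replicas)
```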

Extension capabilities

The following descriptions explain the extension capabilities that are listed in the Scalability entries of the preceding storage types.

Optional capacity storage can be purchased.

You can purchase additional capacity storage to store cold data.

Local SSDs can be pooled together with attached cloud disks.

The storage capacity of a single compute node that uses local SSDs may be too small to meet the storage requirements of large-scale businesses. However, if you purchase more compute nodes only to gain storage capacity, the computing resources of those nodes may be wasted. Instead, you can attach cloud disks to a Lindorm instance that uses local SSDs. In this case, the local SSDs of the instance and the attached cloud disks are used together as one storage pool.
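The trade-off can be illustrated with a rough capacity calculation. In the following sketch, the per-node SSD capacity and node counts are hypothetical values chosen for illustration, not Lindorm specifications.

```python
import math

# Hypothetical capacity-planning sketch. The per-node SSD capacity and node
# counts below are invented for illustration; they are not Lindorm defaults.
TARGET_TIB = 30          # total physical storage the workload needs
SSD_PER_NODE_TIB = 4     # assumed local SSD capacity per node
NODES_FOR_COMPUTE = 3    # nodes actually needed for computing

# Option 1: add nodes only to gain storage; compute on the extra nodes idles.
nodes_for_storage = math.ceil(TARGET_TIB / SSD_PER_NODE_TIB)        # 8 nodes
idle_nodes = nodes_for_storage - NODES_FOR_COMPUTE                  # 5 underused nodes

# Option 2: keep 3 nodes and pool attached cloud disks with the local SSDs.
cloud_disk_tib = TARGET_TIB - NODES_FOR_COMPUTE * SSD_PER_NODE_TIB  # 18 TiB

print(nodes_for_storage, idle_nodes, cloud_disk_tib)
```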

Access to attached cloud disks can be accelerated.

You can attach cloud disks to a Lindorm instance that uses local HDDs. Cloud disks provide a lower average latency and higher IOPS than local HDDs. You can use the attached cloud disks separately to store hot data, or use them together with the local HDDs to store heterogeneous replicas.

Heterogeneous replicas are supported.

Lindorm lets you store the replicas of a data file on a high-performance storage medium and a cost-effective storage medium together. This way, less high-performance storage capacity is used and storage costs are reduced. In normal cases, read requests access the replicas stored in high-performance storage for a better experience. If the nodes that use high-performance storage are unavailable, read requests access the replicas stored in cost-effective storage to preserve data availability and reliability. Heterogeneous replicas are suitable for scenarios in which high performance is required and occasional request glitches are acceptable.

Lindorm supports the following combinations of high-performance and cost-effective storage media for heterogeneous replicas:

  • One replica in local SSDs or cloud disks + one replica in capacity storage

  • One replica in cloud disks + two replicas in local HDDs
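As a rough illustration of the savings, the following sketch compares the capacity that a 100 GiB file consumes under the two supported combinations with the capacity consumed by three ordinary replicas on local SSDs. This is capacity arithmetic only, based on the combinations listed above.

```python
# Capacity arithmetic for heterogeneous replicas (illustrative only).
LOGICAL_GIB = 100  # logical size of a data file

# Baseline: three ordinary replicas, all on local SSDs.
all_ssd = {"local_ssd": 3 * LOGICAL_GIB}                               # 300 GiB of SSD

# Combination 1: one replica on local SSDs or cloud disks + one in capacity storage.
combo_1 = {"local_ssd": LOGICAL_GIB, "capacity_storage": LOGICAL_GIB}  # 100 GiB of SSD

# Combination 2: one replica on cloud disks + two replicas on local HDDs.
combo_2 = {"cloud_disk": LOGICAL_GIB, "local_hdd": 2 * LOGICAL_GIB}

for name, usage in [("all SSD", all_ssd), ("combination 1", combo_1),
                    ("combination 2", combo_2)]:
    print(name, usage)
```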

Note

To activate heterogeneous replicas, contact the technical support of Lindorm (DingTalk ID: s0s3eg3).

Erasure coding that uses 1.5 replicas is supported.

You can enable erasure coding that uses 1.5 replicas for Lindorm instances that use local SSDs or local HDDs. After you enable this feature for an instance, the data redundancy factor of the instance is reduced from 3 to 1.5. By default, Lindorm uses the RS-4-2 algorithm (four data blocks plus two parity blocks) for data redundancy.

For example, if you enable erasure coding that uses 1.5 replicas for an instance, each stripe of data is distributed across six storage nodes, and one additional node is required to ensure data availability when a node fails. In this case, you must configure at least seven storage nodes for the instance.
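The 1.5-replica figure follows from the RS-4-2 layout: every four data blocks are stored together with two parity blocks, so six blocks hold four blocks' worth of data. The following sketch shows this arithmetic:

```python
# Storage overhead of RS-4-2 erasure coding versus 3-replica redundancy.
K, M = 4, 2                       # RS-4-2: 4 data blocks + 2 parity blocks

redundancy_factor = (K + M) / K   # 6 / 4 = 1.5, hence "1.5 replicas"
min_storage_nodes = K + M + 1     # 6 nodes per stripe + 1 spare = 7 nodes

logical_gib = 100
print(logical_gib * redundancy_factor)  # 150.0 GiB with erasure coding
print(logical_gib * 3)                  # 300 GiB with default 3-replica storage
print(min_storage_nodes)                # at least 7 storage nodes required
```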

Note

To enable erasure coding that uses 1.5 replicas, contact the technical support of Lindorm (DingTalk ID: s0s3eg3).