Tair DRAM-based instances are suitable for scenarios that involve a large number of highly concurrent read and write operations on hot data and require higher performance than what Redis Open-Source Edition instances can provide. Compared with Redis Open-Source Edition instances, DRAM-based instances provide more benefits, including enhanced multi-threading performance and integration of multiple extended data structures.
Benefits
Item
Description
Compatibility
DRAM-based instances are fully compatible with native Redis and require no changes to business code. They are compatible with Redis 7.0, Redis 6.0, and Redis 5.0. A minimal connection sketch follows this table.
Performance
DRAM-based instances use the multi-threading model and provide three times the performance of Redis Open-Source Edition instances that have the same specifications. This eliminates the performance limits on high-frequency read and write requests for hot data.
Compared with native Redis databases, DRAM-based instances can process a larger number of queries per second (QPS) at lower latency.
DRAM-based instances ensure stable performance in high-concurrency scenarios and mitigate connection issues that are caused by traffic spikes during peak hours.
DRAM-based instances run full data synchronization tasks and incremental data synchronization tasks in I/O threads to accelerate synchronization.
Deployment architectures
DRAM-based instances support the standard, cluster, and read/write splitting architectures.
Security
DRAM-based instances support SSL encryption for enhanced data security.
DRAM-based instances also support Transparent Data Encryption (TDE), which can be used to encrypt and decrypt Redis Database (RDB) files to ensure data security.
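As an illustration of this compatibility, the following sketch connects with the open source redis-py client and runs standard commands unchanged. The endpoint, port, and password are placeholders for the connection information of your own instance.

```python
# A minimal compatibility check with the open source redis-py client.
# The endpoint, port, and password below are placeholders; replace them
# with the connection information of your own instance.
import redis

client = redis.Redis(
    host="r-example.redis.rds.aliyuncs.com",  # placeholder endpoint
    port=6379,
    password="your_password",                 # placeholder credential
    decode_responses=True,
)

# Standard Redis commands work unchanged.
client.set("greeting", "hello")
print(client.get("greeting"))  # -> "hello"
```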
Scenarios
Tair DRAM-based instances are suitable for scenarios such as live streaming, flash sales, and online education. The following examples describe typical scenarios:
Scenario 1: During flash sales, the QPS on some cached hotkeys may exceed 200,000. Redis Open-Source Edition instances cannot meet this requirement.
Standard DRAM-based instances can efficiently handle requests during flash sales without performance issues.
Scenario 2: Redis Open-Source Edition cluster instances have limits on database transactions and Lua scripts.
DRAM-based instances provide high performance and eliminate the command usage limits that apply to Redis Open-Source Edition cluster instances. For an example of the kind of multi-key Lua script affected by these limits, see the sketch after this list.
Scenario 3: You have created a self-managed Redis cluster that consists of one master node and multiple replica nodes. The number of replica nodes and O&M costs increase as your workloads increase.
DRAM-based instances that use the read/write splitting architecture can provide one data node and up to five read replicas to help you handle millions of QPS.
Scenario 4: You have created a self-managed Redis cluster to handle tens of millions of QPS. The number of data shards and O&M costs increase as your workloads increase.
DRAM-based cluster instances can downsize clusters by two thirds and significantly reduce O&M costs.
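As an illustration of Scenario 2, the following sketch runs a multi-key Lua script with redis-py. On Redis Open-Source Edition cluster instances, such scripts are typically restricted because all keys must hash to the same slot. The key names and connection details are placeholders.

```python
# A multi-key Lua script of the kind that open source cluster instances
# restrict. Key names and connection details are illustrative placeholders.
import redis

client = redis.Redis(host="r-example.redis.rds.aliyuncs.com", port=6379,
                     password="your_password", decode_responses=True)

# Atomically move stock from one counter to another.
TRANSFER_STOCK = """
local amount = tonumber(ARGV[1])
local from = tonumber(redis.call('GET', KEYS[1]) or '0')
if from < amount then
    return 0
end
redis.call('DECRBY', KEYS[1], amount)
redis.call('INCRBY', KEYS[2], amount)
return 1
"""

client.set("stock:warehouse", 100)
client.set("stock:storefront", 0)

ok = client.eval(TRANSFER_STOCK, 2, "stock:warehouse", "stock:storefront", 10)
print("transferred" if ok == 1 else "insufficient stock")
```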
Comparison between threading models
Threading model
Description
Single-threading model
Redis Open-Source Edition instances and native Redis databases use the single-threading model. During request handling, they must read requests, parse requests, process data, and then send responses in sequence. In this model, network I/O operations and request parsing consume most of the available resources.
Multi-threading model
To improve performance, each Tair DRAM-based instance runs multiple threads that handle these steps in parallel:
I/O threads are used to read requests, send responses, and parse commands.
Worker threads are used to process commands and timer events.
Auxiliary threads are used to monitor the heartbeat and status of nodes to ensure high availability.
Each DRAM-based instance reads and parses requests in I/O threads, places the parsed requests as commands in a queue, and then sends these commands to worker threads. Then, the worker threads run the commands to process the requests and send the responses to I/O threads by using a different queue.
A Tair DRAM-based instance supports up to four concurrent I/O threads. Lock-free queues and pipelines are used to transfer data between I/O threads and worker threads to improve multi-threading performance. A conceptual sketch of this hand-off appears after the notes below.
Note
The multi-threading model provides significant performance improvements for common data structures such as String, List, Set, Hash, Zset, HyperLogLog, and Geo, as well as for extended data structures.
Pub/Sub and blocking API operations are processed in worker threads. This optimization increases throughput and improves the performance of these operations by approximately 50%.
Transactions and Lua scripts are designed to be executed in a sequential order. Therefore, they do not benefit from the multi-threading model.
Note
The multi-threading model of Redis 6.0 consumes large amounts of CPU resources to deliver up to twice the performance of the single-threading model used in earlier Redis versions. The Real Multi-I/O model of DRAM-based instances provides fully accelerated I/O threads to sustain a large number of concurrent connections and offer a linear increase in throughput.
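The following is a conceptual Python sketch of the queue-based hand-off between I/O threads and a worker thread described above. It only mirrors the flow of parsed commands and responses; Tair's actual implementation is native code with lock-free queues and multiple concurrent I/O threads, which this single-queue, single-I/O-thread example does not reproduce.

```python
# Conceptual illustration only: an "I/O thread" parses raw requests into
# commands and enqueues them; a "worker thread" executes the commands and
# enqueues responses for the I/O side to send back. This is NOT Tair's
# implementation; it only mirrors the flow described above.
import queue
import threading

request_queue = queue.Queue()   # parsed commands: I/O thread -> worker thread
response_queue = queue.Queue()  # results: worker thread -> I/O thread

def io_thread(raw_requests):
    # Read and parse requests, then enqueue them as commands.
    for raw in raw_requests:
        request_queue.put(raw.strip().split())
    request_queue.put(None)  # signal that input is exhausted

def worker_thread(store):
    # Execute commands against the in-memory store and enqueue responses.
    while True:
        command = request_queue.get()
        if command is None:
            response_queue.put(None)
            break
        op, key, *args = command
        if op == "SET":
            store[key] = args[0]
            response_queue.put("OK")
        elif op == "GET":
            response_queue.put(store.get(key))

store = {}
threading.Thread(target=io_thread, args=(["SET k1 v1", "GET k1"],)).start()
threading.Thread(target=worker_thread, args=(store,)).start()

# The I/O side would send these responses back to clients.
while (resp := response_queue.get()) is not None:
    print(resp)
```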
Performance comparison
Redis Open-Source Edition instances use the same single-threading model as native Redis databases. In the single-threading model, each data node supports 80,000 to 100,000 QPS. Tair DRAM-based instances use the multi-threading model, which allows the I/O, worker, and auxiliary threads to process requests in parallel. Each data node of a DRAM-based instance delivers approximately three times the performance of a data node of a Redis Open-Source Edition instance. The following table compares Redis Open-Source Edition instances and Tair DRAM-based instances across the supported architectures and describes their use cases. A rough client-side throughput check is sketched after the table.
Architecture
Redis Open-Source Edition instances
Tair DRAM-based instances
Standard architecture
These instances are not suitable if the QPS that is required on a single node exceeds 100,000.
These instances are suitable if the QPS that is required on a single node exceeds 100,000.
Cluster architecture
A cluster instance consists of multiple data nodes. Each data node provides performance that is similar to that of a standard instance. If a data node stores hot data and receives a large number of concurrent requests for the hot data, the read and write operations on other data that is stored on the data node may be affected. As a result, the performance of the data node deteriorates.
These instances provide high performance to read and write hot data at reduced maintenance costs.
Read/write splitting architecture
These instances provide high read performance and are suitable for scenarios in which the number of read operations is greater than the number of write operations. However, these instances cannot support a large number of concurrent write operations.
These instances provide high read performance and can also support a large number of concurrent write operations. They are suitable for read-heavy scenarios that must also process a large number of write operations.
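To get a rough sense of throughput on your own instance, the following sketch sends batches of SET commands through redis-py pipelines and reports requests per second. The numbers depend heavily on the client machine, network, payload size, and concurrency, and a single client cannot saturate a DRAM-based instance; treat the result as an approximation, not a benchmark. Connection details are placeholders.

```python
# A rough client-side throughput check: send batches of SET commands in
# pipelines and compute requests per second. Connection details are
# placeholders; results vary with client hardware, network, and payload.
import time
import redis

client = redis.Redis(host="r-example.redis.rds.aliyuncs.com", port=6379,
                     password="your_password")

TOTAL = 100_000   # total SET commands to send
BATCH = 1_000     # commands per pipeline round trip

start = time.perf_counter()
for batch_start in range(0, TOTAL, BATCH):
    pipe = client.pipeline(transaction=False)
    for i in range(batch_start, batch_start + BATCH):
        pipe.set(f"bench:key:{i}", "x" * 64)
    pipe.execute()
elapsed = time.perf_counter() - start

print(f"{TOTAL / elapsed:,.0f} requests per second from a single client")
```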
Integration of multiple Redis modules
Similar to open source Redis, Redis Open-Source Edition supports a variety of data structures such as String, List, Hash, Set, Sorted Set, and Stream. These data structures are sufficient for common development workloads but not for sophisticated workloads. To manage sophisticated workloads with Redis Open-Source Edition, you must modify your application data or run Lua scripts. In contrast, DRAM-based instances integrate multiple extended data structures (modules) that support such workloads natively, without extra application logic or Lua scripts.
Enhanced features
After you enable the data flashback feature for a Tair instance, Tair retains append-only file (AOF) backup data for up to seven days. During the retention period, you can specify a point in time that is accurate to the second, create an instance, and restore the backup data at that point in time to the new instance.
After you enable the proxy query cache feature, the configured proxy nodes cache requests and responses for hotkeys. If the same requests are received from a client within a specific validity period, Tair retrieves the responses to the requests from the cache and returns the responses to the client. During this process, Tair does not need to interact with backend data shards. For more information, see Use proxy query cache to address issues caused by hotkeys.
Global Distributed Cache for Tair is an active geo-redundancy database system that is developed based on Redis Open-Source Edition. Global Distributed Cache supports business scenarios in which multiple sites in different regions provide services at the same time. It helps enterprises replicate the active geo-redundancy architecture of Alibaba.
Q: What do I do if a client does not support the commands that are provided by new modules?
You can define the commands of the new modules in your application code before you use them in your client, or you can use a client that provides built-in support for these commands. For more information, see Clients.
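For clients without built-in support, most Redis clients expose a generic way to send arbitrary commands. The following sketch uses redis-py's execute_command; EXHSET and EXHGET are TairHash commands used only as an illustration, and the endpoint and password are placeholders. Confirm the exact command names supported by your instance and module version.

```python
# Calling an extended-module command that the client has no built-in method
# for, using redis-py's generic execute_command. EXHSET/EXHGET are TairHash
# commands used here for illustration; connection details are placeholders.
import redis

client = redis.Redis(host="r-example.redis.rds.aliyuncs.com", port=6379,
                     password="your_password", decode_responses=True)

# Any command name can be sent as-is; the client does not need to know it.
client.execute_command("EXHSET", "user:1", "score", "99")
print(client.execute_command("EXHGET", "user:1", "score"))  # -> "99"
```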