
Tair (Redis® OSS-Compatible): DRAM-based instances

Last Updated: Oct 24, 2024

Tair DRAM-based instances are suitable for scenarios that involve a large number of highly concurrent read and write operations on hot data and require higher performance than what Redis Open-Source Edition instances can provide. Compared with Redis Open-Source Edition instances, DRAM-based instances provide more benefits, including enhanced multi-threading performance and integration of multiple extended data structures.

Benefits


Compatibility

  • DRAM-based instances are fully compatible with native Redis, requiring no changes to business code. They offer compatibility with Redis 7.0, Redis 6.0, and Redis 5.0.

Performance

  • DRAM-based instances use the multi-threading model and provide three times the performance of Redis Open-Source Edition instances that have the same specifications. This eliminates the performance limits on high-frequency read and write requests for hot data.

  • Compared with native Redis databases, DRAM-based instances can process a larger number of queries per second (QPS) at lower latency.

  • DRAM-based instances ensure stable performance in high-concurrency scenarios and mitigate connection issues that are caused by traffic spikes during peak hours.

  • DRAM-based instances run full and incremental data synchronization tasks in I/O threads to accelerate synchronization.

Deployment architectures

  • DRAM-based instances support the standard, cluster, and read/write splitting architectures.

Integration of multiple data modules

  • DRAM-based instances integrate multiple extended data structures, such as exString, exHash, Bloom, and Vector. For more information, see the Integration of multiple data modules section of this topic.

Enterprise-grade features

  • DRAM-based instances provide enterprise-grade features such as data flashback, proxy query cache, and Global Distributed Cache. For more information, see the Enterprise-grade features section of this topic.

Data security

  • DRAM-based instances support SSL encryption for enhanced data security. For more information about SSL encryption, see Configure SSL encryption.

  • DRAM-based instances support transparent data encryption (TDE). TDE can be used to encrypt and decrypt Redis Database (RDB) files to ensure data security. For more information about TDE, see Enable TDE.
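
As a sketch of what an SSL-encrypted connection might look like from an application, the following fragment uses the third-party redis-py client. This is a hypothetical configuration, not an official example: the endpoint, password, and certificate path are placeholders, and the exact parameters depend on the client library you use.

```python
import redis  # third-party package: redis-py

# Hypothetical SSL connection sketch; all values below are placeholders.
client = redis.Redis(
    host="r-example.redis.rds.aliyuncs.com",  # placeholder instance endpoint
    port=6379,
    password="your-password",                 # placeholder credential
    ssl=True,                                 # enable TLS for the connection
    ssl_ca_certs="/path/to/ca-bundle.pem",    # CA certificate for the instance
)
```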

Scenarios

Tair DRAM-based instances are suitable for scenarios such as live streaming, flash sales, and online education. The following section describes typical scenarios:

  • Scenario 1: During flash sales, the QPS on some cached hotkeys may exceed 200,000. ApsaraDB for Redis Community Edition instances cannot meet this requirement.

    Tair DRAM-based instances can efficiently handle requests during flash sales without performance issues.

  • Scenario 2: ApsaraDB for Redis Community Edition cluster instances have limits on database transactions and Lua scripts.

    Tair DRAM-based instances provide high performance and eliminate the limits on the usage of commands in ApsaraDB for Redis Community Edition cluster instances.

  • Scenario 3: You have created a self-managed Redis instance that consists of one master node and multiple replica nodes. The number of replica nodes and O&M costs increase as your workloads increase.

    Tair DRAM-based instances that use the read/write splitting architecture can provide one data node and up to five read replicas to help you handle millions of QPS.

  • Scenario 4: You have created a self-managed Redis cluster to handle tens of millions of QPS. The number of data shards and O&M costs increase as your workloads increase.

    Tair DRAM-based instances can reduce cluster size by about two thirds and significantly lower O&M costs.
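
To make the read/write splitting idea in Scenario 3 concrete, the following is a minimal stdlib sketch of client-side routing: writes go to the single data node, and reads are distributed round-robin across read replicas. The node names and the read-only command allowlist are illustrative assumptions, not Tair's actual proxy logic.

```python
import itertools

class ReadWriteSplitClient:
    """Minimal sketch of read/write splitting: writes are routed to the
    single data node, reads rotate round-robin across read replicas.
    Node names here are placeholders, not real Tair endpoints."""

    def __init__(self, master, replicas):
        self.master = master
        self._replica_cycle = itertools.cycle(replicas)

    def route(self, command):
        # A real proxy consults a full command table; this small allowlist
        # of read-only commands stands in for that logic.
        read_only = {"GET", "MGET", "EXISTS", "TTL"}
        if command.upper() in read_only:
            return next(self._replica_cycle)
        return self.master

router = ReadWriteSplitClient("master", ["replica-1", "replica-2", "replica-3"])
targets = [router.route(cmd) for cmd in ["GET", "SET", "GET", "GET"]]
print(targets)  # ['replica-1', 'master', 'replica-2', 'replica-3']
```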

Comparison between threading models


Figure 1. Single-threading model of Redis

ApsaraDB for Redis Community Edition instances and native Redis databases use the single-threading model. To handle a request, they must sequentially read the request, parse it, process the data, and then send the response. In this model, network I/O operations and request parsing consume most of the available resources.

Figure 2. Multi-threading model of Tair

To improve performance, each Tair DRAM-based instance runs on multiple threads to process the tasks in these steps in parallel.

  • I/O threads are used to read requests, send responses, and parse commands.

  • Worker threads are used to process commands and timer events.

  • Auxiliary threads are used to monitor the heartbeat and status of nodes to ensure high availability.

Each DRAM-based instance reads and parses requests in I/O threads, places the parsed requests as commands in a queue, and then sends these commands to worker threads. Then, the worker threads run the commands to process the requests and send the responses to I/O threads by using a different queue.

A Tair DRAM-based instance supports up to four concurrent I/O threads. Lock-free queues and pipelines are used to transmit data between I/O threads and worker threads to improve multi-threading performance.
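
The hand-off described above can be sketched with two queues: an I/O thread parses raw requests into commands and enqueues them, and a worker thread executes each command against the keyspace and enqueues responses. This is a toy stdlib illustration of the data flow only; real Tair uses lock-free queues, whereas Python's queue.Queue is mutex-based.

```python
import queue
import threading

commands = queue.Queue()   # I/O thread -> worker thread
responses = queue.Queue()  # worker thread -> I/O thread
store = {}                 # toy keyspace

def io_thread(raw_requests):
    for raw in raw_requests:
        commands.put(raw.split())  # "parse" the request into a command
    commands.put(None)             # sentinel: no more requests

def worker_thread():
    while True:
        cmd = commands.get()
        if cmd is None:
            responses.put(None)
            break
        if cmd[0] == "SET":
            store[cmd[1]] = cmd[2]
            responses.put("OK")
        elif cmd[0] == "GET":
            responses.put(store.get(cmd[1]))

reqs = ["SET greeting hello", "GET greeting"]
threading.Thread(target=io_thread, args=(reqs,)).start()
threading.Thread(target=worker_thread).start()

results = []
while (r := responses.get()) is not None:
    results.append(r)
print(results)  # ['OK', 'hello']
```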

Note
  • The multi-threading model provides significant performance improvements for common data structures such as String, List, Set, Hash, Zset, HyperLogLog, and Geo, as well as for extended data structures.

  • Pub/Sub and blocking API operations are run in worker threads. This optimization accelerates these operations and increases throughput, resulting in a performance improvement of approximately 50%.

  • Transactions and Lua scripts are designed to be executed in a sequential order. Therefore, they do not benefit from the multi-threading model.

Note

The multi-threading model of Redis 6.0 consumes a large amount of CPU resources to deliver up to twice the performance of the single-threading model used in major versions earlier than Redis 6.0. The Real Multi-I/O model of DRAM-based instances fully accelerates I/O threads to sustain a large number of concurrent connections and offer a linear increase in throughput.

Performance comparison

ApsaraDB for Redis instances use the same single-threading model as native Redis databases. In the single-threading model, each data node supports 80,000 to 100,000 QPS. Tair DRAM-based instances use the multi-threading model, which allows the I/O, worker, and auxiliary threads to process requests in parallel. Each data node of a DRAM-based instance delivers approximately three times the performance of a data node of an ApsaraDB for Redis Community Edition instance. The following comparison covers ApsaraDB for Redis instances and Tair DRAM-based instances across deployment architectures.

Standard architecture

  • ApsaraDB for Redis instances: Not suitable if the required QPS on a single node exceeds 100,000.

  • Tair DRAM-based instances: Suitable if the required QPS on a single node exceeds 100,000.

Cluster architecture

  • ApsaraDB for Redis instances: A cluster instance consists of multiple data nodes, each of which provides performance similar to that of a standard instance. If a data node stores hot data and receives a large number of concurrent requests for that data, read and write operations on other data stored on the same node may be affected, and the performance of the node deteriorates.

  • Tair DRAM-based instances: Provide high performance for reading and writing hot data at reduced maintenance costs.

Read/write splitting architecture

  • ApsaraDB for Redis instances: Provide high read performance and are suitable for scenarios in which reads outnumber writes. However, these instances cannot support a large number of concurrent write operations.

  • Tair DRAM-based instances: Provide high read performance and can also support a large number of concurrent write operations. Suitable for scenarios in which reads outnumber writes but a large volume of writes must still be processed.

Integration of multiple data modules

Redis Open-Source Edition supports the same native data structures as open source Redis, such as String, List, Hash, Set, Sorted Set, and Stream. These data structures are sufficient for common development workloads but not for sophisticated ones. To manage sophisticated workloads, you must restructure your application data or run Lua scripts.

DRAM-based instances integrate multiple in-house Tair modules to expand the applicable scope of Tair. These modules include exString (including commands that enhance Redis string functionality), exHash, exZset, GIS, Bloom, Doc, TS, Cpc, Roaring, Search, and Vector. These modules simplify business development in complex scenarios and allow you to focus on your business innovation.

Note
  • DRAM-based instances that are compatible with Redis 7.0 or 6.0 support all the preceding data structures.

  • DRAM-based instances that are compatible with Redis 5.0 support all the preceding data structures other than TairVector.
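
To illustrate the kind of capability these modules add, the following stdlib sketch shows the optimistic-locking idea behind version-aware string commands such as those in exString: each value carries a version, and a conditional write succeeds only if the caller's expected version matches. The method names `exset` and `excas` here are illustrative; they are not the module's actual commands, semantics, or implementation.

```python
class VersionedStore:
    """Toy illustration of versioned strings with compare-and-swap."""

    def __init__(self):
        self._data = {}  # key -> (value, version)

    def exset(self, key, value):
        """Unconditional write; bumps the version and returns it."""
        _, version = self._data.get(key, (None, 0))
        self._data[key] = (value, version + 1)
        return version + 1

    def excas(self, key, value, expected_version):
        """Write only if the stored version matches the expected one."""
        if key not in self._data:
            return None
        _, version = self._data[key]
        if version != expected_version:
            return False  # another writer updated the key first
        self._data[key] = (value, version + 1)
        return True

store = VersionedStore()
v = store.exset("stock", "100")        # first write: version becomes 1
ok = store.excas("stock", "99", v)     # succeeds: version matches
stale = store.excas("stock", "98", v)  # fails: version has moved on
print(v, ok, stale)  # 1 True False
```

Without such module commands, the same check-and-set logic would require a Lua script or client-side transactions.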

Enterprise-grade features


Data flashback for data restoration by point in time

After you enable the data flashback feature for a Tair instance, Tair retains append-only file (AOF) backup data for up to seven days. During the retention period, you can specify a point in time that is accurate to the second to create an instance and restore the backup data at the specified point in time to the new instance.

Proxy query cache

After you enable the proxy query cache feature, the configured proxy nodes cache requests and responses for hotkeys. If the same requests are received from a client within a specific validity period, Tair retrieves the responses to the requests from the cache and returns the responses to the client. During this process, Tair does not need to interact with backend data shards. For more information, see Use proxy query cache to address issues caused by hotkeys.
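
The caching behavior described above can be sketched as a TTL cache in front of a backend fetch: repeated requests for a hotkey within the validity period are served from the proxy's cache without touching the data shard. This is a stdlib illustration of the idea only; the real feature's cache granularity, eviction, and invalidation differ.

```python
import time

class QueryCacheProxy:
    """Toy proxy that caches responses for a validity period."""

    def __init__(self, backend, validity_seconds=1.0):
        self.backend = backend          # callable: key -> value
        self.validity = validity_seconds
        self._cache = {}                # key -> (value, expires_at)
        self.backend_hits = 0           # trips to the data shard

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        cached = self._cache.get(key)
        if cached and cached[1] > now:
            return cached[0]            # served from the proxy cache
        self.backend_hits += 1
        value = self.backend(key)
        self._cache[key] = (value, now + self.validity)
        return value

proxy = QueryCacheProxy(lambda k: f"value-of-{k}", validity_seconds=1.0)
proxy.get("hotkey", now=0.0)   # miss: fetched from the data shard
proxy.get("hotkey", now=0.5)   # hit: served from the cache
proxy.get("hotkey", now=2.0)   # expired: fetched again
print(proxy.backend_hits)  # 2
```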

Global Distributed Cache

Global Distributed Cache for Tair is an active geo-redundancy database system that is developed based on Redis Open-Source Edition. Global Distributed Cache supports business scenarios in which multiple sites in different regions provide services at the same time. It helps enterprises replicate the active geo-redundancy architecture of Alibaba.

Two-way data synchronization by using DTS

Data Transmission Service (DTS) supports two-way data synchronization between Tair instances. This synchronization solution is suitable for scenarios such as active geo-redundancy and geo-disaster recovery. For more information, see Configure two-way data synchronization between Tair instances.

FAQ

What do I do if a client does not support commands from a new data module?

You can define the commands from the new data module in your application code before you use the commands in your client. You can also use a Tair client that provides built-in support for these commands. For more information, see Tair clients.
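
Module commands travel over the same RESP wire protocol as core Redis commands, which is why a client that can send arbitrary commands can use them without built-in support. The sketch below encodes a command as a RESP array of bulk strings; the `EXSET` command name is used only as an example of a module command.

```python
def encode_resp(*parts):
    """Encode a command as a RESP array of bulk strings, the wire
    format that Redis-compatible servers accept for any command."""
    out = [f"*{len(parts)}\r\n".encode()]
    for part in parts:
        data = part if isinstance(part, bytes) else str(part).encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

wire = encode_resp("EXSET", "key", "value")
print(wire)  # b'*3\r\n$5\r\nEXSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n'
```

In practice you would rely on a client method for raw commands (for example, redis-py's `execute_command`) rather than hand-encoding RESP; the sketch only shows why that works.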