From the Database Team
Redis is generally described as a single-process, single-threaded model. This is not strictly true: Redis also runs several background threads to perform housekeeping work, such as lazily freeing dirty data and closing file descriptors. The main thread, however, is responsible for the major tasks, including but not limited to receiving client connections, processing connection read/write events, parsing requests, executing commands, processing timer events, and synchronizing data.

Because a single process with a single thread runs on only one CPU core, a Redis server can process roughly 80,000 to 100,000 QPS for small packets; anything beyond that exceeds the capacity of a single server. A common solution is to partition the data across multiple servers in a distributed architecture, but this approach has many drawbacks: there are many Redis servers to manage; some commands that work on a single Redis server do not work across data partitions; partitioning cannot solve the hot-spot read/write problem; and data skew, data redistribution, and scaling up or down all become more complex.

Given the limitations of the single-process, single-thread model, we want to make Redis multi-threaded so that it can fully exploit the SMP multi-core architecture and increase the throughput of a single Redis server. The simplest idea is to let every thread perform both I/O and command processing. However, because the data structures Redis operates on are complex, such a design needs locks to guarantee thread safety, and improper lock granularity can degrade performance.
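To make the single-threaded baseline concrete, the sketch below shows the reactor pattern that such a main thread follows: one loop accepts connections, reads and parses a request, executes the command, and writes the reply, all on one core. It is a simplified illustration, not Redis source code, and the `execute_command` helper is a hypothetical stand-in for command processing.

```c
/* Minimal sketch of a single-threaded reactor loop (illustrative only,
 * not Redis source). One thread handles accept, read, execute, and write,
 * so throughput is bounded by a single CPU core. */
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <string.h>

#define MAX_EVENTS 1024

/* Hypothetical stand-in for parsing a request and running the command. */
static void execute_command(int fd, const char *req, ssize_t len) {
    (void)req; (void)len;
    const char *reply = "+OK\r\n";
    write(fd, reply, strlen(reply));      /* reply written by the same thread */
}

int main(void) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(6379);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 511);

    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[MAX_EVENTS];
    char buf[4096];
    for (;;) {                                   /* the whole server is this loop */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {               /* new client connection */
                int client = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = client };
                epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
            } else {                             /* read, parse, execute, reply */
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0) { close(fd); continue; }
                execute_command(fd, buf, r);
            }
        }
    }
}
```

Every step in this loop competes for the same core, which is why small-packet throughput tops out at the figures above.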
Our proposal is to increase the number of I/O threads so that independent I/O threads read and write data on the connections, parse requests, and send reply packets, while a single thread still executes the commands and handles the timer events. In this way, the throughput of a single Redis server can be increased.
There are three thread types, namely:

1. Main thread: receives client connections and hands them off to the I/O threads.
2. I/O threads: read data from the connections, parse requests, and write reply packets back to the clients.
3. Worker thread: executes the commands and processes the timer events.

Note that there is only one worker thread at any time; "multi-threaded" here refers to the I/O threads.
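The sketch below illustrates this division of labor: a few I/O threads hand parsed commands to a single worker thread over a queue, and the worker executes them one at a time so the data structures themselves need no locks. All type, queue, and function names are hypothetical, and the I/O threads fabricate requests instead of reading real sockets; it is an illustration of the pattern, not Redis code.

```c
/* Sketch of the proposed threading model (illustrative, not Redis source):
 * several I/O threads parse requests and would write replies, while a
 * single worker thread executes commands. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_IO_THREADS 4

typedef struct request {
    int client_fd;            /* connection the request came from */
    char cmd[64];             /* parsed command (simplified) */
    struct request *next;
} request_t;

/* A minimal blocking FIFO shared by the I/O threads and the worker thread. */
typedef struct {
    request_t *head, *tail;
    pthread_mutex_t lock;
    pthread_cond_t nonempty;
} queue_t;

static queue_t cmd_queue = { NULL, NULL, PTHREAD_MUTEX_INITIALIZER,
                             PTHREAD_COND_INITIALIZER };

static void enqueue(queue_t *q, request_t *r) {
    pthread_mutex_lock(&q->lock);
    r->next = NULL;
    if (q->tail) q->tail->next = r; else q->head = r;
    q->tail = r;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

static request_t *dequeue(queue_t *q) {
    pthread_mutex_lock(&q->lock);
    while (!q->head) pthread_cond_wait(&q->nonempty, &q->lock);
    request_t *r = q->head;
    q->head = r->next;
    if (!q->head) q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    return r;
}

/* I/O thread: in a real server it would read from sockets, parse the RESP
 * protocol, and later write replies back; here it just fabricates requests. */
static void *io_thread(void *arg) {
    long id = (long)arg;
    for (int i = 0; i < 3; i++) {
        request_t *r = malloc(sizeof(*r));
        r->client_fd = (int)id;
        snprintf(r->cmd, sizeof(r->cmd), "GET key:%ld:%d", id, i);
        enqueue(&cmd_queue, r);           /* hand the parsed command to the worker */
    }
    return NULL;
}

/* The single worker thread: executes commands one by one against the
 * (unsynchronized) data structures, then hands replies back to I/O. */
static void *worker_thread(void *arg) {
    (void)arg;
    for (int handled = 0; handled < NUM_IO_THREADS * 3; handled++) {
        request_t *r = dequeue(&cmd_queue);
        printf("worker executes '%s' for client %d\n", r->cmd, r->client_fd);
        free(r);                          /* reply writing would go back to an I/O thread */
    }
    return NULL;
}

int main(void) {
    pthread_t ios[NUM_IO_THREADS], worker;
    pthread_create(&worker, NULL, worker_thread, NULL);
    for (long i = 0; i < NUM_IO_THREADS; i++)
        pthread_create(&ios[i], NULL, io_thread, (void *)i);
    for (int i = 0; i < NUM_IO_THREADS; i++) pthread_join(ios[i], NULL);
    pthread_join(worker, NULL);
    return 0;
}
```

Because only the worker thread touches the keyspace, the command execution path stays the same as in single-threaded Redis, and locking is confined to the hand-off queue.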
The stress test results indicate that read/write performance can be improved roughly threefold in the small-packet scenario.
When the master sends synchronization data to the slave, the data is sent in an I/O thread. When the slave reads data from the master, the full synchronization data is read in the worker thread, while the incremental data is read in an I/O thread. This effectively increases the synchronization speed.
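As a rough illustration of that routing on the replica side (with hypothetical helper names; the article does not show this code), a replica could load the bulk data of a full synchronization in the worker thread and attach the subsequent incremental stream to an I/O thread like any other connection:

```c
#include <stdio.h>

/* Hypothetical stubs standing in for the real loader and dispatcher. */
static void worker_load_full_snapshot(int fd) { printf("worker loads full data from fd %d\n", fd); }
static void io_thread_attach(int fd)          { printf("I/O thread reads stream from fd %d\n", fd); }

typedef enum { SYNC_FULL, SYNC_INCREMENTAL } sync_stage_t;

/* Route the master link on the replica: full sync data is handled by the
 * worker thread, the later incremental stream by an I/O thread. */
static void handle_master_link(int master_fd, sync_stage_t stage) {
    if (stage == SYNC_FULL)
        worker_load_full_snapshot(master_fd);
    else
        io_thread_attach(master_fd);
}

int main(void) {
    handle_master_link(10, SYNC_FULL);         /* initial full synchronization */
    handle_master_link(10, SYNC_INCREMENTAL);  /* steady-state replication stream */
    return 0;
}
```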
The first step is to increase the number of I/O threads and optimize the I/O read/write capability. The next step is to split the worker thread as well, so that each thread performs both the I/O reading and the command processing currently done by the single worker thread; a possible routing scheme is sketched below.
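The article does not spell out how commands would be divided among such threads. One common way to avoid locks in a design like this (an assumption for illustration, not a statement from the team) is to shard keys across threads by hash, so that each thread owns a disjoint part of the keyspace:

```c
#include <stdio.h>

#define NUM_THREADS 4

/* FNV-1a hash; any stable hash works for routing keys to threads. */
static unsigned int fnv1a(const char *s) {
    unsigned int h = 2166136261u;
    while (*s) { h ^= (unsigned char)*s++; h *= 16777619u; }
    return h;
}

/* Each thread would own the keys that hash to its index, so it could read,
 * parse, and execute commands for those keys without taking locks. */
static int thread_for_key(const char *key) {
    return (int)(fnv1a(key) % NUM_THREADS);
}

int main(void) {
    const char *keys[] = { "user:1001", "session:abc", "cart:42" };
    for (int i = 0; i < 3; i++)
        printf("%s -> thread %d\n", keys[i], thread_for_key(keys[i]));
    return 0;
}
```

With routing of this kind, each thread could run its own event loop and execute commands for its own keys without synchronizing with the others.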
To learn more about Alibaba Cloud ApsaraDB for Redis, visit www.alibabacloud.com/product/apsaradb-for-redis