
Tair (Redis® OSS-Compatible): Performance whitepaper of Redis Open-Source Edition instances and Tair DRAM-based instances

Last Updated: Nov 05, 2024

This topic describes the performance test results, test environment, test tool, and test method of Tair DRAM-based instances and Redis Open-Source Edition instances.

Test results

Performance tests are conducted on a set of basic Redis commands, including SET and GET, with a focus on the following key metrics.

Latency values are in milliseconds. "Tair" denotes the Tair DRAM-based instance and "Redis OSS" denotes the Redis Open-Source Edition instance.

| Command   | Tair QPS | Tair avg latency | Tair P99 latency | Redis OSS QPS | Redis OSS avg latency | Redis OSS P99 latency |
|-----------|---------:|-----------------:|-----------------:|--------------:|----------------------:|----------------------:|
| SET       | 282,656  | 0.45             | 0.86             | 142,376       | 0.45                  | 0.72                  |
| GET       | 519,761  | 0.24             | 0.36             | 204,690       | 0.31                  | 0.47                  |
| ZADD      | 208,169  | 0.62             | 1.14             | 113,135       | 0.57                  | 0.78                  |
| ZSCORE    | 463,904  | 0.27             | 0.40             | 170,163       | 0.37                  | 0.54                  |
| HSET      | 260,069  | 0.49             | 1.03             | 124,613       | 0.51                  | 0.97                  |
| HGET      | 494,603  | 0.25             | 0.37             | 188,903       | 0.34                  | 0.52                  |
| LPUSH     | 286,324  | 0.44             | 0.84             | 153,269       | 0.42                  | 0.59                  |
| LINDEX    | 414,070  | 0.30             | 0.45             | 157,568       | 0.40                  | 0.58                  |
| SADD      | 292,738  | 0.44             | 0.86             | 140,155       | 0.45                  | 0.63                  |
| SISMEMBER | 531,139  | 0.24             | 0.34             | 181,492       | 0.35                  | 0.52                  |
| EVALSHA   | 214,303  | 0.60             | 1.12             | 101,136       | 0.63                  | 0.91                  |

Test metrics:

  • QPS: the number of read and write operations processed per second.

  • Average Latency: the average latency of operations. Unit: milliseconds.

  • 99th Percentile Latency: the maximum latency experienced by 99% of operations. Unit: milliseconds. For example, a value of 0.5 indicates that 99% of requests can be processed within 0.5 milliseconds.
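To make the relationship between these metrics concrete, the following Python sketch computes QPS, average latency, and 99th percentile latency from a list of per-request latencies. The sample data is made up for illustration; resp-benchmark reports these values for you.

```python
import math

def summarize(latencies_ms, duration_s):
    """Compute QPS, average latency, and P99 latency from per-request
    latencies (in milliseconds) collected over duration_s seconds."""
    qps = len(latencies_ms) / duration_s
    avg = sum(latencies_ms) / len(latencies_ms)
    # P99: the latency that 99% of requests stay at or below.
    ordered = sorted(latencies_ms)
    p99 = ordered[math.ceil(0.99 * len(ordered)) - 1]
    return qps, avg, p99

# Made-up sample: 1,000 requests over 2 seconds, with 10 slow outliers.
samples = [0.2] * 990 + [1.0] * 10
qps, avg, p99 = summarize(samples, duration_s=2)
print(qps, round(avg, 4), p99)  # 500.0 0.208 0.2
```

Here 99% of the requests complete within 0.2 ms, so the P99 latency is 0.2 even though a few outliers took 1.0 ms.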

Note
  • The test results represent the average values obtained from multiple tests conducted across multiple zones and instances.

  • The latency in the test results is the total end-to-end delay, which includes the time that packets spend queuing on both the database and the stress testing client.

  • The test results are affected by multiple uncontrollable factors. A margin of error of approximately 10% is considered reasonable.

  • The test results represent only the outcomes of single-command tests conducted on new instances. For stress testing in production environments, take your specific business scenarios into account.

  • The test results reflect the maximum performance of the instances. In production environments, we recommend that you do not keep instances operating at peak load conditions.

Test environment

Database

Region and zone: Beijing Zone L, Hangzhou Zone K, Shanghai Zone N, and Shenzhen Zone C

Note: This test is performed across multiple regions. The test report represents only the average performance level of the preceding zones.

Instance architecture: The standard master-replica architecture is used. For more information, see Standard architecture.

Note: Performance description of other architectures:

  • Cluster instance in proxy mode: When the requested keys are evenly distributed on a cluster instance in proxy mode, the performance of the instance is no less than n times the performance of a standard instance.

  • Cluster instance in direct connection mode: When the requested keys are evenly distributed on a cluster instance in direct connection mode, the performance of the instance is equal to n times the performance of a standard instance.

  • Read/write splitting instance: The write performance of a read/write splitting instance is slightly lower than the performance of a standard instance due to increased replication traffic. The read performance is no less than n times the performance of a standard instance.

n indicates the number of shards in a cluster instance or the total number of nodes in a read/write splitting instance.
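The n-times relationship above can be turned into a back-of-the-envelope estimate. The helper below is an illustrative sketch, not a published formula; it simply multiplies a standard instance's measured QPS by the shard count, which is the expected lower bound when keys are evenly distributed.

```python
def estimated_cluster_qps(standard_qps: int, n_shards: int) -> int:
    """Rough estimate for a cluster with evenly distributed keys:
    about n times a standard instance's QPS."""
    return standard_qps * n_shards

# Example: GET on a Redis Open-Source Edition standard instance measured
# 204,690 QPS; a hypothetical 4-shard cluster with evenly distributed
# keys would be expected to reach roughly 4x that.
print(estimated_cluster_qps(204_690, 4))  # 818760
```

Remember that this holds only when requests spread evenly across shards; a hot key concentrates load on one shard and breaks the estimate.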

Instance type: The test results are minimally affected by the instance types used. The following instance types are selected for this test:

  • Tair DRAM-based instance with 8 GB of memory (tair.rdb.8g)

  • Redis Open-Source Edition instance with 8 GB of memory (redis.shard.xlarge.ce)

For more information about the instance types, see Overview.

Client

Host of the client: An Elastic Compute Service (ECS) instance of the ecs.g7.8xlarge type. For more information, see Overview of instance families.

Region and zone: The same region and zone as the Tair instance under test.

Operating system: Alibaba Cloud Linux 3

Network: The client resides in the same virtual private cloud (VPC) as the Tair instance and connects to it over the VPC.

Test tool

The open source tool resp-benchmark is used for stress testing. Common test items, such as the SET and GET commands, align with those in redis-benchmark, and additional test items are available to more accurately replicate real-world access patterns. The tool drives the instance with multiple threads to reveal the peak performance of Tair under stress.

Note

You can run the resp-benchmark --help command to obtain more information about the configuration items, or visit the GitHub homepage of the tool for additional information.

Installation method

pip install resp-benchmark==0.1.7

Test items

Important
  • We recommend that you clear the database before each test to prevent interference from existing data.

  • If the connection count is not specified, resp-benchmark automatically selects a reasonably suitable number of connections. However, to accurately measure performance under extreme load, we recommend that you set the number of connections manually, for example, by using the -c 128 parameter. If the number of connections is too low, the database does not receive enough concurrent requests and the measured QPS is low. If the number is too high, the database lacks the resources to handle the load, packets queue in the network for longer, and latency increases. Because many factors influence the optimal value, no single setting fits all cases. Common settings are 32, 64, 128, 192, and 256; adjust the number based on actual test results.
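One way to apply this advice is to sweep the common connection counts and compare the reported QPS. The following Python sketch only generates and prints the resp-benchmark command lines to run; the endpoint is the masked placeholder used in the examples below, so replace it with your own instance endpoint.

```python
# Sketch: generate resp-benchmark invocations that sweep the common
# connection counts. HOST is a masked placeholder endpoint; substitute
# your own instance endpoint before running the printed commands.
HOST = "r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com"

commands = []
for c in (32, 64, 128, 192, 256):
    cmd = (f"resp-benchmark -h {HOST} -p 6379 -s 20 -c {c} "
           '"SET {key uniform 10000000} {value 64}"')
    commands.append(cmd)
    print(cmd)
# Run each printed command and keep the connection count that yields
# the highest QPS at acceptable latency.
```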

The following section provides examples for testing Redis commands:

  • SET

    This metric measures the performance of the SET command.

    This test involves running the SET command, in which the keys range from 0 to 10,000,000 (formatted as key_0000000000 to key_0009999999) and each value is 64 bytes in size. The test runs for 20 seconds.

    resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 -s 20 "SET {key uniform 10000000} {value 64}"
  • GET

    This metric measures the performance of the GET command.

    1. Create a dataset that consists of key-value pairs, in which the keys range from 0 to 10,000,000 and each value is 64 bytes in size.

      resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 --load -c 256 -P 10 -n 10000000 "SET {key sequence 10000000} {value 64}"
    2. Benchmark the performance of the GET command for a period of 20 seconds.

      resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 -s 20 "GET {key uniform 10000000}"
  • ZADD

    This metric measures the performance of the ZADD command.

    This test evaluates the write performance of ZADD by adding members to sorted sets, in which the keys range from 0 to 1,000 and the scores vary from 0 to 70,000. Each key can contain up to 10,000 members. The test runs for 20 seconds.

    resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 -s 20 "ZADD {key uniform 1000} {rand 70000} {key uniform 10000}"
  • ZSCORE

    This metric measures the performance of the ZSCORE command.

    1. Create a dataset of sorted sets, in which the keys range from 0 to 1,000 and the scores vary from 0 to 70,000. Each key contains 10,007 members.

      resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 --load -c 256 -P 10 -n 10007000 "ZADD {key sequence 1000} {rand 70000} {key sequence 10007}"
    2. Benchmark the performance of the ZSCORE command for a period of 20 seconds.

      resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 -s 20 "ZSCORE {key uniform 1000} {key uniform 10007}"
  • HSET

    This metric measures the performance of the HSET command.

    This test involves running the HSET command, in which the keys range from 0 to 1,000 and each value is 64 bytes in size. Each key contains fields that range from 0 to 10,000. The test runs for 20 seconds.

    resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 -s 20 "HSET {key uniform 1000} {key uniform 10000} {value 64}"
  • HGET

    This metric measures the performance of the HGET command.

    1. Create a dataset of hashes, in which the keys range from 0 to 1,000 and each field value is 64 bytes in size. Each key contains 10,007 fields.

      resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 --load -c 256 -P 10 -n 10007000 "HSET {key sequence 1000} {key sequence 10007} {value 64}"
    2. Benchmark the performance of the HGET command for a period of 20 seconds.

      resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 -s 20 "HGET {key uniform 1000} {key uniform 10007}"
  • LPUSH

    This metric measures the performance of the LPUSH command.

    This test involves running the LPUSH command, in which the keys range from 0 to 1,000 and each value is 64 bytes in size. The test runs for 20 seconds.

    resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 -s 20 "LPUSH {key uniform 1000} {value 64}"
  • LINDEX

    This metric measures the performance of the LINDEX command.

    1. Create a dataset of lists, in which the keys range from 0 to 1,000 and each element is 64 bytes in size. Each key contains 10,000 elements.

      resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 --load -c 256 -P 10 -n 10000000 "LPUSH {key sequence 1000} {value 64}"
    2. Benchmark the performance of the LINDEX command for a period of 20 seconds.

      resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 -s 20 "LINDEX {key uniform 1000} {rand 10000}"
  • SADD

    This metric measures the performance of the SADD command.

    This test involves running the SADD command, in which the keys range from 0 to 1,000 and each value is 64 bytes in size. The test runs for 20 seconds.

    resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 -s 20 "SADD {key uniform 1000} {value 64}"
  • SISMEMBER

    This metric measures the performance of the SISMEMBER command.

    1. Create a dataset of sets, in which the keys range from 0 to 1,000 and each key contains 10,007 members.

      resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 --load -c 256 -P 10 -n 10007000 "SADD {key sequence 1000} {key sequence 10007}"
    2. Benchmark the performance of the SISMEMBER command for a period of 20 seconds.

      resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 -s 20 "SISMEMBER {key uniform 1000} {key uniform 10007}"
  • EVALSHA

    This metric measures the performance of running the SET command in the EVALSHA context. In this case, the SET command is used to store keys ranging from 0 to 10,000,000, with each value being 64 bytes in size.

    1. Load the Lua script:

      redis-cli -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 SCRIPT LOAD "return redis.call('SET', KEYS[1], ARGV[1])"
    2. Benchmark the performance of the EVALSHA command for a period of 20 seconds.

      resp-benchmark -h r-bp1u****8qyvemv2em.redis.rds.aliyuncs.com -p 6379 -s 20 "EVALSHA d8f2fad9f8e86a53d2a6ebd960b33c4972cacc37 1 {key uniform 10000000} {value 64}"
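The hash passed to EVALSHA is the SHA1 digest of the loaded script source, which is also what SCRIPT LOAD returns. As a local sketch, you can derive the digest yourself without contacting the server:

```python
import hashlib

# EVALSHA addresses a script by the SHA1 of its exact source text,
# the same digest that SCRIPT LOAD returns for that script.
script = "return redis.call('SET', KEYS[1], ARGV[1])"
digest = hashlib.sha1(script.encode()).hexdigest()
print(digest)  # 40 hexadecimal characters
```

Note that the digest is computed over the exact byte sequence of the script, so any change in whitespace or quoting produces a different hash, and EVALSHA with the old hash fails with a NOSCRIPT error.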