
Alibaba Cloud Elasticsearch Performance Optimization

This article lists some ideas on how to optimize the performance of Elasticsearch and improve index and query throughput.

Released by ELK Geek

Elasticsearch is a popular, distributed open-source search and analytics engine. It features high performance, scalability, and fault tolerance. It strengthens the search capabilities of Apache Lucene and significantly improves control over massive data indexing and querying. Based on the practical experience of the open-source community and Alibaba Cloud platform, let’s understand how to optimize the performance of Elasticsearch to improve index and query throughput.

1) Suggestions for Elasticsearch Deployment

1.1) Use an SSD

The biggest bottleneck in Elasticsearch is disk read and write performance, especially random reads. Query speeds on solid-state drives (SSDs), whether PCI Express (PCIe) or Serial Advanced Technology Attachment (SATA) SSDs, are generally 5 to 10 times faster than on spinning disks such as SATA and SAS drives, while the gain in write performance is less pronounced.

In scenarios with high document retrieval performance requirements, we recommend using SSDs to store data and setting the ratio of memory to disk space to 1:10. In log analysis scenarios with low concurrency requirements, hard disks are acceptable; set the ratio of memory to disk space to 1:50. We recommend storing no more than 2 TB (5 TB at most) of data on a single node to avoid slow queries and system instability.

The following table compares the full-text retrieval performance of SATA disks and SSDs when 1 TB data is stored on a single node.

Test Environment: Elasticsearch 5.5.3, 1 billion household registration records, a single node with a 16-core CPU and 64 GB memory, twelve 6 TB SATA disks, and two 1.5 TB SSDs.

Disk type | Concurrency | QPS | Average retrieval RT | RT for 50% of requests | RT for 90% of requests | IOPS
SATA disk | 10 | 17 | 563ms | 478ms | 994ms | 1200
SATA disk | 50 | 64 | 773ms | 711ms | 1155ms | 1800
SATA disk | 100 | 110 | 902ms | 841ms | 1225ms | 2040
SATA disk | 200 | 84 | 2369ms | 2335ms | 2909ms | 2400
SSD | 10 | 94 | 105ms | 90ms | 200ms | 25400
SSD | 50 | 144 | 346ms | 341ms | 411ms | 66000
SSD | 100 | 152 | 654ms | 689ms | 791ms | 60000
SSD | 200 | 210 | 950ms | 1179ms | 1369ms | 60000

1.2) Configure the JVM with Half of the Machine Memory, Up to 32 GB

Modify the conf/jvm.options configuration file. Set -Xms and -Xmx to the same value; we recommend about half of the machine memory, leaving the other half for the operating system to cache data. Also set the Java Virtual Machine (JVM) heap to at least 2 GB; otherwise, Elasticsearch may fail to start or run out of memory. We recommend keeping the JVM heap at or below 32 GB; beyond that threshold, the JVM can no longer use compressed object pointers, which wastes memory. When the machine has more than 64 GB of memory, we recommend setting both -Xms and -Xmx to 30 GB.
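For example, on a machine with 64 GB of memory, the heap can be pinned at 30 GB as shown below (the values are illustrative; adjust them to your own hardware).

# Configure the JVM heap size (conf/jvm.options):
-Xms30g
-Xmx30g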

1.3) Configure Dedicated Master Nodes for Large-scale Clusters to Avoid Split-brain

Elasticsearch master nodes manage cluster metadata, add and delete indexes and nodes, and regularly broadcast the latest cluster status to each node. In a large-scale cluster, we recommend specifically configuring dedicated master nodes to manage cluster data and not to store data. This frees the nodes from the burden of data read and write.

# Configure dedicated master nodes (conf/elasticsearch.yml):
node.master: true
node.data: false
node.ingest: false

# Configure data nodes (conf/elasticsearch.yml):
node.master: false
node.data: true
node.ingest: true

By default, each Elasticsearch node is both a master-eligible node and a data node. We recommend setting the minimum_master_nodes parameter to a quorum of the master-eligible nodes, that is, (number of master-eligible nodes / 2) + 1, so that a master is elected only when enough master-eligible nodes are available.

For example, for a cluster with three master-eligible nodes, change the value of the minimum_master_nodes parameter from the default of 1 to 2.

# Configure the minimum number of master nodes (conf/elasticsearch.yml):
discovery.zen.minimum_master_nodes: 2

1.4) Optimize the Linux Operating System

Disable swap partitions to prevent memory swap from degrading performance. Comment out the lines that contain swap partitions in the /etc/fstab file.

sed -i '/swap/s/^/#/' /etc/fstab
swapoff -a

Set the maximum number of files that a user can open to 655360 or larger.

echo "* - nofile 655360" >> /etc/security/limits.conf

Increase the maximum number of processes and threads that a single user can create.

echo "* - nproc 131072" >> /etc/security/limits.conf

Set the maximum number of memory maps that a single process can use.

echo "vm.max_map_count = 655360" >> /etc/sysctl.conf

Apply the changes so that they take effect immediately.

sysctl -p

2) Suggestions for Index Performance Optimization

2.1) Set Appropriate Numbers of Index Shards and Replicas

We recommend setting the number of index shards to an integer multiple of the number of nodes in the cluster. Set the number of replicas to 0 during initial data import and to 1 in a production environment. With one replica, the cluster loses no data even if a single node crashes. With more replicas, more storage space is used, the operating system's cache hit rate drops, and retrieval performance may degrade. We recommend no more than three index shards per node, with 10 GB to 40 GB of data per shard. The number of shards cannot be changed after an index is created, but the number of replicas can be changed at any time. In Elasticsearch 6.x and earlier versions, an index has 5 shards and 1 replica by default. Since Elasticsearch 7.0, the default number of shards has changed to 1, while the default number of replicas remains 1.

The following table lists the impact of different shard counts on write performance. Test environment: a 7-node Elasticsearch 6.3 cluster, 30 GB of news data, each node with a 56-core CPU, 380 GB memory, and a 3 TB SSD, 0 replicas, 20 write threads, and 10 MB of data submitted per batch.

Cluster index shards | Index shards per node | Write time
2 | 0 or 1 | 600s
7 | 1 | 327s
14 | 2 | 258s
21 | 3 | 211s
28 | 4 | 211s
56 | 8 | 214s

Create the index and its settings as shown below.

curl -XPUT http://localhost:9200/fulltext001?pretty -H 'Content-Type: application/json'   -d '
{
    "settings" : {
      "refresh_interval": "30s",
      "merge.policy.max_merged_segment": "1000mb",
      "translog.durability": "async",
      "translog.flush_threshold_size": "2gb",
      "translog.sync_interval": "100s",
      "index" : {
        "number_of_shards" : "21",
        "number_of_replicas" : "0"
      }
    }
}
'

Set mappings as shown below.

curl -XPOST http://localhost:9200/fulltext001/doc/_mapping?pretty  -H 'Content-Type: application/json' -d '
{
    "doc" : {
        "_all" : {
            "enabled" : false
         },
        "properties" : {
          "content" : {
            "type" : "text",
            "analyzer":"ik_max_word"
          },
          "id" : {
            "type" : "keyword"
          }
        }
    }
}
'

Write data as shown below.

curl -XPUT 'http://localhost:9200/fulltext001/doc/1?pretty' -H 'Content-Type: application/json' -d '
{
    "id": "https://www.huxiu.com/article/215169.html",
    "content": "“娃娃机,迷你KTV,VR体验馆,堪称商场三大标配‘神器’。”一家地处商业中心的大型综合体负责人告诉懂懂笔记,在过去的这几个月里,几乎所有的综合体都“标配”了这三种“设备”…"
}'

Now, change the number of replicas.

curl -XPUT "http://localhost:9200/fulltext001/_settings" -H 'Content-Type: application/json' -d'
{
    "number_of_replicas": 1
}'

2.2) Use Batch Requests

The performance of bulk requests is much better than that of single-document index requests. Use the bulk API when writing data. We recommend committing 5 MB to 15 MB of data per batch. For example, write performance is good when about ten thousand 1 KB records, or about two thousand 5 KB records, are committed in one batch.

Refer to the commands below to call the batch request API.

curl -XPOST "http://localhost:9200/_bulk" -H 'Content-Type: application/json' -d'
{ "index" : { "_index" : "test", "_type" : "_doc", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_type" : "_doc", "_id" : "2" } }
{ "create" : { "_index" : "test", "_type" : "_doc", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : {"_id" : "1", "_type" : "_doc", "_index" : "test"} }
{ "doc" : {"field2" : "value2"} }
'

2.3) Send Data through Multiple Processes or Threads

When data is written in batches through a single thread, the server's CPU resources cannot be fully used. Try adjusting the number of write threads, or submit write requests from multiple clients to the Elasticsearch server. As with sizing bulk requests, the optimal number of workers can only be determined by testing: gradually increase the number of workers until the cluster's I/O or CPU utilization reaches its maximum, as in the sketch below.
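For example, one simple way to parallelize writes from a single client machine is to split the data into several bulk files and post them concurrently. The sketch below is illustrative only (the bulk_*.json file names, the worker count of 8, and the endpoint are assumptions, not taken from the original article).

# Send pre-generated bulk files with 8 concurrent curl workers:
ls bulk_*.json | xargs -P 8 -I {} curl -s -XPOST "http://localhost:9200/_bulk" -H 'Content-Type: application/x-ndjson' --data-binary "@{}"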

2.4) Increase the Refresh Interval

Refresh is a lightweight process for writing and opening a new segment in Elasticsearch. By default, each shard gets refreshed automatically every second. Therefore, Elasticsearch is a near real-time search platform, and changes in documents will be visible within a second.

However, not every scenario needs a refresh every second. When you use Elasticsearch to index a large volume of log files, for example, you may care more about indexing speed than about near real-time search. In that case, set refresh_interval to reduce the refresh frequency of the index.

Set the refresh interval when creating the index, as shown below.

curl -XPUT "http://localhost:9200/index" -H 'Content-Type: application/json' -d'
{
    "settings" : {
      "refresh_interval": "30s"
    }
}'

The refresh_interval setting can be changed dynamically on an existing index. When building a large index in a production environment, you can disable automatic refresh entirely and re-enable it once you start serving queries from the index.

curl -XPUT "http://localhost:9200/index/_settings" -H 'Content-Type: application/json' -d'
{ "refresh_interval": -1 }'
curl -XPUT "http://localhost:9200/index/_settings" -H 'Content-Type: application/json' -d'
{ "refresh_interval": "1s" }'

2.5) Design Reasonable Field Types for Mapping Configuration

When Elasticsearch writes a document and the index name specified in the request does not exist, an index is automatically created and the field types are inferred from the document content. However, the inferred mapping is not necessarily the most efficient one; designing field types deliberately for your application scenario helps greatly.

For example, write the following record:

curl -XPUT "http://localhost:9200/twitter/doc/1?pretty" -H 'Content-Type: application/json' -d'
{
    "user": "kimchy",
    "post_date": "2009-11-15T13:12:00",
    "message": "Trying out Elasticsearch, so far so good?"
}'

When you query the mappings of the automatically created index, you can see that the post_date field is correctly identified as a date, while the message and user fields are mapped as both text and keyword (a redundant multi-field). This reduces write speed and occupies extra disk space.

curl -XGET "http://localhost:9200/twitter"
{
  "twitter": {
    "mappings": {
      "doc": {
        "properties": {
          "message": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "post_date": {
            "type": "date"
          },
          "user": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      }
    },
    "settings": {
      "index": {
        "number_of_shards": "5",
        "number_of_replicas": "1",
      }
    }
  }
}

Set appropriate numbers of shards and replicas for the index, and configure field types and analyzers based on your business scenario. If you do not need to search across every field, disable the _all field; if you only need to combine a few fields, use copy_to instead (see the copy_to sketch after the following example).

curl -XPUT "http://localhost:9200/twitter?pretty" -H 'Content-Type: application/json' -d'
{
    "settings" : {
      "index" : {
        "number_of_shards" : "20",
        "number_of_replicas" : "0"
      }
    }
}'
curl -XPOST "http://localhost:9200/twitter/doc/_mapping?pretty" -H 'Content-Type: application/json' -d'
{
    "doc" : {
        "_all" : {
        "enabled" : false
    },
    "properties" : {
          "user" : {
          "type" : "keyword"
          },
          "post_date" : {
            "type" : "date"
          },
          "message" : {
            "type" : "text",
            "analyzer" : "cjk"
          }
        }
    }
}'
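
As a hedged illustration of copy_to (the index name articles and the field names title, body, and full_content are assumptions for this sketch, not part of the original example), the following mapping copies two fields into one combined field that can be searched in place of _all.

curl -XPUT "http://localhost:9200/articles?pretty" -H 'Content-Type: application/json' -d'
{
    "mappings" : {
        "doc" : {
            "properties" : {
                "title" : {
                    "type" : "text",
                    "copy_to" : "full_content"
                },
                "body" : {
                    "type" : "text",
                    "copy_to" : "full_content"
                },
                "full_content" : {
                    "type" : "text"
                }
            }
        }
    }
}'

Queries that previously targeted _all can then search the full_content field instead.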

3) Suggestions for Query Performance Optimization

3.1) Cache Filter-based Query Results and Shard-based Query Results

By default, Elasticsearch calculates the relevance score between each returned record and the query statement. For non-full-text queries, however, you often only want to find the target data, not rank results by relevance. In this case, use a filter: it frees Elasticsearch from scoring and allows the filter results to be cached for subsequent queries with the same filter, which improves query efficiency.

  • Normal Query
curl -XGET "http://localhost:9200/twitter/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "match": {
      "user": "kimchy"
    }
  }
}'
  • Filter-based Query
curl -XGET "http://localhost:9200/twitter/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "filter": {
         "match": {
          "user": "kimchy"
        }
      }
    }
  }
}'

The shard request cache stores aggregation results, suggester results, and hit counts rather than the returned documents. Therefore, it takes effect only for requests that do not return documents, that is, requests with size set to 0 (the equivalent of the former search_type=count).

The size of the shard request cache defaults to 1% of the JVM heap. To change it, set the following parameter in the config/elasticsearch.yml file:

indices.requests.cache.size: 1%
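
The cache can also be enabled explicitly per request. The following sketch is illustrative (the index name twitter is reused from the earlier examples; the aggregation name is an assumption): it sends an aggregation-only request with size set to 0 so that the result is cacheable.

curl -XGET "http://localhost:9200/twitter/_search?request_cache=true" -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "users": {
      "terms": { "field": "user" }
    }
  }
}'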

View the memory usage of the cache (name: the node name; query_cache: the cache for filter-based query results; request_cache: the cache for shard-based query results; fielddata: the cache for field data; segments: the index segments).

curl -XGET "http://localhost:9200/_cat/nodes?h=name,query_cache.memory_size,request_cache.memory_size,fielddata.memory_size,segments.memory&v" 

3.2) Use the _routing Field

When Elasticsearch writes a document, the document is routed to a shard in an index by using a formula. The default formula is as follows:

shard_num = hash(_routing) % num_primary_shards

By default, the value of the _routing field is the document's _id. Alternatively, set a frequently queried field, such as a user ID or region, as the routing field based on your business requirements, so that queries can skip irrelevant shards and run faster.

Specify a route when a document is being written.

curl -XPUT "http://localhost:9200/my_index/my_type/1?routing=user1" -H 'Content-Type: application/json' -d'
{
  "title": "This is a document",
  "author": "user1"
}'

If no route is specified for the query, all shards are queried.

curl -XGET "http://localhost:9200/my_index/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "match": {
      "title": "document"
    }
  }
}'

The query result is as follows:

{
  "took": 2,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  }
  ......
}

If a route is specified for the query, only one shard is queried.

curl -XGET "http://localhost:9200/my_index/_search?routing=user1" -H 'Content-Type: application/json' -d'
{
  "query": {
    "match": {
      "title": "document"
    }
  }
}'

The query result is as follows:

{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  }
  ......
}

3.3) Merge Read-only Indexes and Disable Historical Data Indexes

Force-merging a read-only index into a single large segment reduces index fragmentation and the JVM heap memory that segments permanently occupy. If business requirements allow, close historical data indexes that no longer need to be queried to reduce JVM memory usage.

  • API for Merging Indexes
curl -XPOST "http://localhost:9200/abc20180923/_forcemerge"
  • API for Disabling Indexes:
curl -XPOST "http://localhost:9200/abc2017*/_close"

3.4) Configure Query Aggregation Nodes

Query aggregation (coordinating) nodes distribute a query request to the other nodes, collect and merge the partial results, and return the response to the client that sent the request. Configure higher CPU and memory specifications for these nodes to speed up queries and aggregations and increase the cache hit ratio. For example, one customer's cluster of twenty-five 8-core 32 GB nodes handled about 4,000 queries per second (QPS). After six 16-core 32 GB nodes were added as dedicated query aggregation nodes, and the cache, shard, and replica parameters were tuned based on the observed CPU utilization and JVM heap usage, the QPS rose to about 12,000.

# Configure query aggregation nodes (conf/elasticsearch.yml):
node.master: false
node.data: false
node.ingest: false

3.5) Set Fields and the Number of Records to Be Read

By default, the first 10 records are returned for a query request, and at most 10,000 records can be read in one request. Use the from and size parameters to control how many records are read at a time, and use the _source parameter to return only the fields you need, avoiding large fields.

  • Sample Query Request
curl -XGET http://localhost:9200/fulltext001/_search?pretty  -H 'Content-Type: application/json' -d ' 
{
  "from": 0,
  "size": 10,
  "_source": "id",
  "query": {
    "bool": {
      "must": [
        {"match": {"content":"虎嗅"}}
      ]
    }
  },
  "sort": [
    {
      "id": {
        "order": "asc"
      }
    }
  ]
}
'

3.6) Prevent Prefix Fuzzy Matching

Elasticsearch supports fuzzy matching with the * and ? wildcard characters. Running such queries, especially with a leading wildcard (prefix fuzzy matching), on an index with more than 1 billion documents takes a long time and may even cause memory overflow. Avoid these operations in production environments with high-concurrency query requests.

For example, when a customer ran a wildcard query such as *A8848* on a license plate number field, the cluster suffered a high load. It is better to solve this performance problem through data preprocessing: add a redundant keyword field for the plate number, split every plate number into all of its substrings of length 1 to 7 in advance, and store the substrings in that field. For the plate 沪A88488, the stored values would include 沪, A, 8, 4, 沪A, A8, 88, 84, 48, 沪A8, ..., 沪A88488. The query then becomes an exact term match on the keyword field, for example A8848, as sketched below.
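
A minimal sketch of this preprocessing approach follows. The index name vehicles, the field names, and the truncated substring list are assumptions for illustration; in practice the writing application would generate the complete substring list for each plate.

# Map the plate number and its pre-computed substrings as keyword fields:
curl -XPUT "http://localhost:9200/vehicles?pretty" -H 'Content-Type: application/json' -d'
{
    "mappings" : {
        "doc" : {
            "properties" : {
                "plate_number" : { "type" : "keyword" },
                "plate_number_segments" : { "type" : "keyword" }
            }
        }
    }
}'

# Index a document whose substrings were generated by the client (list truncated here):
curl -XPUT "http://localhost:9200/vehicles/doc/1?pretty" -H 'Content-Type: application/json' -d'
{
    "plate_number" : "沪A88488",
    "plate_number_segments" : ["沪", "A", "8", "4", "沪A", "A8", "88", "84", "48", "A8848", "沪A88488"]
}'

# An exact term query on the substrings field replaces the wildcard query:
curl -XGET "http://localhost:9200/vehicles/_search?pretty" -H 'Content-Type: application/json' -d'
{
    "query" : {
        "term" : { "plate_number_segments" : "A8848" }
    }
}'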

3.7) Avoid Index Sparsity

Elasticsearch versions earlier than 6.x allow multiple types per index, while Elasticsearch 6.x and later allow only one. Index sparsity occurs when documents in the same index use very different sets of fields, for example when multiple types with different fields share one index or when hundreds of indexes with different fields are merged into a single index.

We recommend creating only one type for each index and creating separate indexes for data with different fields, rather than merging these indexes into a large index. Then, each query request may read the corresponding index as needed to avoid querying large indexes and scanning all the records. This speeds up the queries.
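
As a hedged illustration (the index names logs-app and logs-nginx and their fields are assumptions), keeping data with different fields in separate indexes lets each query read only the index it needs.

# One index per data shape, each with only the fields it actually uses:
curl -XPUT "http://localhost:9200/logs-app?pretty" -H 'Content-Type: application/json' -d'
{ "mappings": { "doc": { "properties": { "level": { "type": "keyword" }, "message": { "type": "text" } } } } }'
curl -XPUT "http://localhost:9200/logs-nginx?pretty" -H 'Content-Type: application/json' -d'
{ "mappings": { "doc": { "properties": { "status": { "type": "integer" }, "uri": { "type": "keyword" } } } } }'

# A query reads only the relevant index instead of one large merged index:
curl -XGET "http://localhost:9200/logs-nginx/_search?pretty" -H 'Content-Type: application/json' -d'
{ "query": { "term": { "status": 404 } } }'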

3.8) Add and Upgrade Cluster Nodes

Generally, an Elasticsearch cluster with a larger number of servers with higher specifications provides higher processing capabilities.

The following table lists the query performance test results of different node scales.

Test Environment: Elasticsearch 5.5.3 cluster, a single node with a 16-core CPU, 64 GB memory, and 2 TB SSD, 1 billion household registration records, and 1 TB data.

Cluster nodes | Replicas | Average RT (10 concurrent) | Average RT (50 concurrent) | Average RT (100 concurrent) | Average RT (200 concurrent) | QPS (200 concurrent) | CPU utilization (200 concurrent) | CPU I/O wait (200 concurrent)
1 | 0 | 77ms | 459ms | 438ms | 1001ms | 200 | 16% | 52%
3 | 0 | 38ms | 103ms | 162ms | 298ms | 669 | 45% | 34%
3 | 2 | 271ms | 356ms | 577ms | 818ms | 244 | 19% | 54%
10 | 0 | 21ms | 36ms | 48ms | 81ms | 2467 | 40% | 10%

The following table lists the write performance test results for different node scales.

Test Environment: Elasticsearch 6.3.2 cluster, each node with a 16-core CPU, 64 GB memory, and a 2 TB SSD, 1 billion household registration records of 1 KB each (a 1 TB data set), and 20 concurrent write threads.

Cluster nodes | Replicas | Write TPS | Total write time | Cluster CPU utilization
10 | 0 | 88945 | 11242s | 50%
50 | 0 | 180638 | 5535s | 20%

We recommend running tests on your actual data and usage scenario to determine the best practices that meet your requirements. You can also use the elastic scaling feature of Alibaba Cloud Elasticsearch to increase disk space and add or upgrade nodes as needed.

In September 2017, Alibaba Cloud launched the Elastic Stack in the cloud, based on open-source Elasticsearch and the commercial X-Pack plug-in. The Alibaba Cloud Elasticsearch Customer Service Team also shares cases and practices that address customers' pain points when moving to the cloud. For more information, visit https://www.alibabacloud.com/product/elasticsearch
