Elasticsearch Endless Queue?


One day, Tony was notified that the Kibana instance used to query logs had stopped working. Kibana is a crucial part of log aggregation in the new log system used in our microservice environment, so Tony set out to deal with it.

Rejected Task

Finding nothing related in Kibana's log, Tony decided to check Elasticsearch's log instead. There he found many long EsRejectedExecutionException stack traces:

[DEBUG][o.e.a.s.TransportSearchAction] [xLI0MAD] All shards failed for phase: [query]
org.elasticsearch.common.util.concurrent.EsRejectedExecutionException:
 rejected execution of org.elasticsearch.transport.TransportService$7@30fadff6 
 on EsThreadPoolExecutor[search, queue capacity = 1000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@6c9b1b8d
 [Running, pool size = 25, active threads = 25, queued tasks = 1921, completed tasks = 169477]]
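
The mechanics behind that log line can be sketched in a few lines of Python (a toy model for illustration, not ES code): a fixed pool of workers backed by a bounded queue, where a submission that finds both all workers busy and the queue full gets rejected.

```python
import queue

# Toy model of a bounded thread pool, mirroring the numbers in the
# log above: 25 worker threads and a search queue of capacity 1000.
class BoundedPool:
    def __init__(self, pool_size, queue_capacity):
        self.pool_size = pool_size
        self.active = 0
        self.rejected = 0
        self.queue = queue.Queue(maxsize=queue_capacity)

    def submit(self, task):
        if self.active < self.pool_size:
            self.active += 1              # a free worker picks it up
            return True
        try:
            self.queue.put_nowait(task)   # otherwise it waits in the queue
            return True
        except queue.Full:
            self.rejected += 1            # queue full -> task rejected
            return False

pool = BoundedPool(pool_size=25, queue_capacity=1000)
results = [pool.submit(i) for i in range(1100)]
print(results.count(False))  # 75 = 1100 - 25 busy threads - 1000 queued
```

Once the 25 threads and 1000 queue slots are taken, every further task is rejected immediately; this is the behavior behind EsRejectedExecutionException.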

Googling the exception turned up some Stack Overflow threads introducing the thread pool and request queue model that ES uses to handle search requests. They suggest enlarging the queue size or adding more nodes to ease the pressure.

“But those solutions don’t seem to suit my case. I just sent a single request from Kibana, so why is the queue overflowing?” Tony wondered about the situation and decided to find out what was going on under the hood.

Check List

Soon, Tony found a very informative discussion on the ES forum which gives a list of things to check:

  • Clients are simply sending too many queries too quickly in a fast burst, overwhelming the queue. We can monitor this with Node Stats over time to see if it’s bursty or smooth
  • We have got some very slow queries which get “stuck” for a long time, eating up threads and causing the queue to back up. You can enable the slow log to see if there are queries that are taking an exceptionally long time, then try to tune those
  • There may potentially be “unending” scripts written in Groovy or something. E.g. a loop that never exits, causing the thread to spin forever.
  • The hardware may be under-provisioned for your workload, and bottlenecking on some resource (disk, cpu, etc)
  • A temporary hiccup from your iSCSI target, which causes all the in-flight operations to block waiting for the disks to come back. It wouldn’t take a big latency hiccup to seriously backup a busy cluster… ES generally expects disks to always be available.
  • Heavy garbage collections could cause problems too. Check Node Stats to see if there are many/long old gen GCs running

Queue Size & Slow Query & Script

In order to check the queue info, Tony executed the following query:

$ curl 'http://localhost:9200/_nodes/stats/thread_pool?human&pretty'

	...
   "search" : {
     "threads" : 25,
     "queue" : 0,
     "active" : 0,
     "rejected" : 30715,
     "largest" : 25,
     "completed" : 173796
   },

The queue is now empty even though it has rejected many tasks. Re-sending the Kibana search request and querying the stats again gives a similar result, just with larger numbers.
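
Pulling the relevant counters out of the Node Stats payload makes it easy to compare the rejection count between two runs. A minimal sketch in Python; the JSON here is a trimmed, made-up sample shaped like the response above:

```python
import json

# Trimmed, illustrative sample shaped like the
# _nodes/stats/thread_pool response shown above.
payload = """
{"nodes": {"xLI0MAD": {"thread_pool": {"search":
  {"threads": 25, "queue": 0, "active": 0,
   "rejected": 30715, "largest": 25, "completed": 173796}}}}}
"""

def search_rejections(stats_json):
    """Return {node_id: rejected_count} for the search thread pool."""
    stats = json.loads(stats_json)
    return {node_id: node["thread_pool"]["search"]["rejected"]
            for node_id, node in stats["nodes"].items()}

print(search_rejections(payload))  # {'xLI0MAD': 30715}
```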

This also suggests that there is no slow query in progress and no unending script running.

Hardware

In order to check the hardware state, Tony first checked the IO with iotop:

  276 be/3 root        0.00 B/s   98.35 K/s  0.00 % 12.63 % [jbd2/sda1-8]
22229 be/4 elastics    0.00 B/s   24.59 K/s  0.00 %  1.18 % java -Xms8g -Xmx8g -XX:+UseConcMarkSweepG~rch -Edefault.path.conf=/etc/elasticsearch
 2623 be/4 mysql       0.00 B/s   10.54 K/s  0.00 %  0.18 % mysqld
22227 be/4 elastics    0.00 B/s   10.54 K/s  0.00 %  0.08 % java -Xms8g -Xmx8g -XX:+UseConcMarkSweepG~rch -Edefault.path.conf=/etc/elasticsearch
11854 be/4 elastics    0.00 B/s   56.20 K/s  0.00 %  0.00 % java -Xms8g -Xmx8g -XX:+UseConcMarkSweepG~rch -Edefault.path.conf=/etc/elasticsearch
11882 be/4 elastics    0.00 B/s   17.56 K/s  0.00 %  0.00 % java -Xms8g -Xmx8g -XX:+UseConcMarkSweepG~rch -Edefault.path.conf=/etc/elasticsearch

As the result shows, ES has a very small rate of IO, which does not seem to be a bottleneck at all.

Then, Tony checked the network with iftop, but still did not see any sign of “under-provisioned hardware”:

192.168.1.202      => 192.168.1.204      8.56kb  7.48kb  7.40kb
                   <=                    19.7kb  12.5kb  12.6kb
192.168.1.202      => 192.168.1.100      4.63kb  4.99kb  4.33kb
                   <=                    4.50kb  5.08kb  4.59kb
192.168.1.202      => 192.168.1.200      3.58kb  4.58kb  3.44kb
                   <=                    3.35kb  4.57kb  3.45kb

Finally, Tony checked the GC info of ES:

$ curl 'http://localhost:9200/_nodes/stats/jvm?human&pretty'
	...
   "gc" : {
     "collectors" : {
       "young" : {
         "collection_count" : 1662,
         "collection_time" : "53.1s",
         "collection_time_in_millis" : 53169
       },
       "old" : {
         "collection_count" : 1,
         "collection_time" : "229ms",
         "collection_time_in_millis" : 229
       }
     }
   },

It still seems fine: only one old-gen collection, taking 229ms in total.

What is a Search Task?

After checking all the items and finding nothing, Tony was a little disappointed. He decided to go through the list again. While re-checking the first item, he became somewhat confused: “What exactly is a search task? Is a query like the following one task or not?”

curl -XGET "http://localhost:9200/log-*/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "match": {
      "application": "searcher"
    }
  }
}'

“No, this is not a single task. First, the index wildcard will expand to many indices; then, every shard is an actual Lucene engine which executes the query, so every shard gets a task.”

#tasks = #(shards of queried indices)
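
A back-of-the-envelope instance of this formula; the numbers are assumptions for illustration, not measured from the cluster:

```python
# Illustrative numbers, not measured: a log-* wildcard over many
# daily indices, each with the old default of 5 primary shards.
indices_matched = 250   # assumption: log-* expands to 250 daily indices
shards_per_index = 5    # assumption: pre-7.x default number_of_shards
queue_capacity = 1000   # the search queue capacity from the log above

tasks = indices_matched * shards_per_index
print(tasks)                   # 1250
print(tasks > queue_capacity)  # True: one query can overflow the queue
```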

“And it seems we have too many indices and shards on our node, so even a simple query may relate to over 1000 tasks, which overwhelms the queue. The granularity is too small: rolling by day,” Tony thought, looking at the logstash config.

# logstash ES output config
index => "log-%{application}@%{[@metadata][env]}-%{+YYYY.MM.dd}"
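
One possible mitigation (assuming the resulting index sizes stay manageable, which is not verified here) is to roll the index at a coarser granularity, e.g. monthly, so the wildcard expands to far fewer indices:

# logstash ES output config, rolling monthly instead of daily
index => "log-%{application}@%{[@metadata][env]}-%{+YYYY.MM}"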

PS

Shard over-allocation may not seem like a big problem, but it costs a lot of memory and CPU cycles on a single node. In this post, it caused too many tasks on one node, which caused EsRejectedExecutionException for a single Kibana search.

Written with StackEdit.
