The search engine project is now in the test environment. Tony performs some operations on the search system to maintain the Elasticsearch cluster. This process is a new adventure in Elasticsearch.
#### Cluster Health

"Tony, Tony. Come here. Your Elasticsearch cluster is in red state now," the operations engineer said loudly.
So Tony logs in to the server and checks the Elasticsearch cluster state:
curl '192.168.1.100:9200/_cluster/health?pretty'
{
"cluster_name" : "searcher-dev",
"status" : "red",
...
"unassigned_shards" : 3,
...
"number_of_pending_tasks" : xxx,
}
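To narrow a red status down to the offending indices without listing every shard, the cluster health API also accepts a `level` parameter (a quick sketch against the same host):

```
# per-index health: the red entries are the indices missing primary shards
curl '192.168.1.100:9200/_cluster/health?level=indices&pretty'
```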
“Status is red because some primary shards are not allocated. So which shards are not allocated?” Tony wonders, so he sends another request for detailed info:
curl -XGET '192.168.1.100:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason&pretty' | grep UNASSIGNED
blog 0 r UNASSIGNED INDEX_CREATED
test 0 r UNASSIGNED INDEX_CREATED
.kibana 0 r UNASSIGNED NODE_LEFT
file 0 p UNASSIGNED CLUSTER_RECOVERED
Some of the failed items are easy to deal with: INDEX_CREATED just means the index was created but its shards have not been allocated yet, typically because the cluster is busy or the index action failed. But some failures, such as NODE_LEFT, can't be resolved this way, so Tony has to delete those indices.
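Before deleting anything, one way to see exactly why a shard stays unassigned is the allocation explain API (a sketch; the index name and shard number below are taken from the `_cat/shards` output above, and deletion really is a last resort since the data is lost):

```
# ask the cluster why this shard is still unassigned
curl -XGET '192.168.1.100:9200/_cluster/allocation/explain?pretty' -H 'Content-Type: application/json' -d '
{
  "index": "file",
  "shard": 0,
  "primary": true
}'

# last resort for an unrecoverable index: delete it
curl -XDELETE '192.168.1.100:9200/file?pretty'
```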
#### Optimization
Tony found that the search engine had recently become very slow, so he decided to optimize search performance. He found the following checklist on the official site:
- Avoid types / avoid putting unrelated structures in the same index; normalize document structures
- Avoid large documents: by default Elasticsearch refuses any document larger than 100MB (`http.max_content_length`)
- Don’t return large result sets
- Disable `norms` and `doc_values` (see the mapping sketch after this list):
    - `norms` can be disabled if producing scores is not necessary on a field; this is typically true for fields that are only used for filtering
    - `doc_values` can be disabled on fields that are neither used for sorting nor for aggregations
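As a minimal sketch of that last item (the `blog_v2` index, `post` type, and field names are only illustrative, not from the actual cluster), a mapping that turns off `norms` on a field used only for full-text filtering and `doc_values` on a field that is never sorted or aggregated on could look like this:

```
curl -XPUT '192.168.1.100:9200/blog_v2?pretty' -H 'Content-Type: application/json' -d '
{
  "mappings": {
    "post": {
      "properties": {
        "content": { "type": "text",    "norms": false },
        "session": { "type": "keyword", "doc_values": false }
      }
    }
  }
}'
```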
Tony felt confused about the first item, so he did some searching and found the reason.
Types
Because Lucene has no concept of document types, the type name of each document is stored with the document in a metadata field called `_type`. When we search for documents of a particular type, Elasticsearch simply uses a filter on the `_type` field to restrict results to documents of that type.
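For example (the `blog` index, `post` type, and `title` field here are only placeholders), searching a single type is effectively the same as adding that `_type` filter yourself:

```
# GET blog/post/_search is roughly equivalent to this filtered query on blog
curl -XGET '192.168.1.100:9200/blog/_search?pretty' -H 'Content-Type: application/json' -d '
{
  "query": {
    "bool": {
      "must":   { "match": { "title": "elasticsearch" } },
      "filter": { "term":  { "_type": "post" } }
    }
  }
}'
```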
Types are not as well suited for entirely different types of data. If your two types have mutually exclusive sets of fields, that means half your index is going to contain “empty” values (the fields will be sparse), which will eventually cause performance problems. In these cases, it’s much better to utilize two independent indices.
It seems that Tony's setup already follows the general suggestions from the ES site, so now he has to find where the problem is by himself.
#### Bottleneck Detection
From the result of the health check, he noticed there were many pending tasks queued for the master to process. Normally, only events that change the state of the whole cluster have to be handled by the master node; other work, for example serving a query, can be done by any single node. So he queried the master's pending task list for details:
curl '192.168.1.100:9200/_cluster/pending_tasks?pretty'
{
"tasks" : [
{
"insert_order" : 2099,
"priority" : "URGENT",
"source" : "shard-started shard id [[blog-3525][0]], allocation id [uiBdTWN5SQ2axdQ1GrS5mg], primary term [0], message [after peer recovery]",
"executing" : true,
"time_in_queue_millis" : 2577,
"time_in_queue" : "2.5s"
},
...
}
In normal cases this queue should be empty, or its tasks handled quickly; otherwise the master becomes the performance bottleneck, which here is caused by a large amount of index creation and management.
Considering the requirements of the business system and the design for scale, Tony splits the data into different indices along two dimensions: time and user domain. When the bottleneck happened, a test was creating a large number of indices.
Tony now understands that the master is the bottleneck, but what operations is the cluster busy with? He queried as follows and found that the thread pool had rejected a large number of bulk index tasks:
curl -s -XGET "http://192.168.1.100:9200/_nodes/stats/thread_pool"
{
...
"thread_pool" : {
"bulk" : {
"threads" : 16,
"queue" : 0,
"active" : 0,
"rejected" : 10963,
"largest" : 16,
"completed" : 6820
},
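A more compact way to keep an eye on those rejections over time is the cat thread pool API (a sketch; the columns are picked with the `h=` parameter):

```
# one line per node: active workers, queue length, rejected and completed bulk tasks
curl '192.168.1.100:9200/_cat/thread_pool/bulk?v&h=node_name,name,active,queue,rejected,completed'
```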
#### Tuning Indexing
In order to tune for a faster indexing rate, Tony did the following:
- disable monitoring
xpack.monitoring.enabled: false # in elasticsearch.yml, kibana.yml, logstash.yml
- increase index memory
indices.memory.index_buffer_size: 20% # default is 10%
- increase the thread pool worker's queue size ("be careful about setting thread pool"; see the Thread Pool Settings reference below)
thread_pool:
    bulk:
        queue_size: 1000 # default 200
- disable refresh and replicas for the time being (restored after the bulk load; see the sketch below)
PUT /*/_settings
{
"index" : {
"refresh_interval" : "-1",
"number_of_replicas" : 0
}
}
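Once the bulk load is finished these two settings should be put back; a sketch that restores the Elasticsearch defaults (1s refresh interval, one replica) for the same index pattern:

```
curl -XPUT '192.168.1.100:9200/*/_settings' -H 'Content-Type: application/json' -d '
{
  "index" : {
    "refresh_interval" : "1s",
    "number_of_replicas" : 1
  }
}'
```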
#### Fix OOM
After customizing the index settings, he restarted the cluster and tested indexing again. But some time later, he found the cluster was down. Checking the log, he found that ES had run out of memory and exited.
So he did the following:
`jvm.options`: increase memory and enable heap dumps.
-Xms4g # don't cross 32G
-Xmx4g
-XX:+HeapDumpOnOutOfMemoryError
# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
-XX:HeapDumpPath=/var/tmp/elasticsearch/
`service elasticsearch force-reload`: restart the service so that the updated JVM options take effect.
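After the restart, a quick sketch to confirm that the new heap size is actually in effect (`heap.max` and `heap.percent` are standard `_cat/nodes` columns):

```
curl '192.168.1.100:9200/_cat/nodes?v&h=name,heap.max,heap.percent'
```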
#### Ref
- Thread Pool Settings
- [Monitoring Settings](https://www.elastic.co/guide/en/x-pack/5.2/monitoring-settings.html)
- cat shards
- Tune for index rate