ELK Setup Problem List

In order to collect, visualize, and analyze logs, we decided to use ELK for these related jobs.
In two earlier series of blog posts, we have already introduced some basics about Logstash and Elasticsearch.
If you are not familiar with Elasticsearch and Logstash, you may find those posts useful.

ELK Architecture

ELK architecture

First, we would like to go through the architecture recommended in the official documentation:
  • Multiple nodes – for robustness and resilience against node failure;
  • Filebeat – ensures at-least-once delivery and can load-balance logs across multiple Logstash nodes;
  • Logstash – enable the persistent queue to protect in-flight events against node failures;
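The Filebeat side of this architecture can be sketched with a minimal filebeat.yml for the 5.x series; the log path and Logstash hosts below are placeholders for your own setup:

```yaml
# filebeat.yml (sketch; paths and hosts are placeholders)
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/app/*.log

output.logstash:
  # With loadbalance enabled, events are distributed across all listed
  # Logstash nodes instead of always going to the first reachable one.
  hosts: ["logstash1:5044", "logstash2:5044"]
  loadbalance: true
```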

Elasticsearch

Now we come to how to set up an Elasticsearch cluster. The basic configuration needed to form a cluster is very simple.

Bind Address

We can give Elasticsearch an array of addresses to bind to, so that it responds on different IPs of the same node. For example, if we configure Elasticsearch with:
network.host: "some_ip"
we can only access Elasticsearch via some_ip but not via localhost. To allow both, we can do the following:
network.host: ["some_ip", "localhost"]
Bind Host vs Published Host
The network config can also be split into two settings:
  • bind_host: the addresses Elasticsearch binds to, i.e. where clients can reach it;
  • publish_host: the single address published to other nodes so they can connect to this node;
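A sketch of the split configuration in elasticsearch.yml; the addresses are placeholders for your own network:

```yaml
# elasticsearch.yml (sketch; addresses are placeholders)
network.bind_host: ["192.168.1.10", "127.0.0.1"]  # accept requests on these addresses
network.publish_host: "192.168.1.10"              # the address other nodes connect to
```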
Special Values
It also supports some special values:
  • _[networkInterface]_: the addresses of a network interface, for example _en0_.
  • _local_: any loopback address on the system, for example 127.0.0.1.
  • _site_: any site-local address on the system, for example 192.168.0.1.
  • _global_: any globally-scoped address on the system, for example 8.8.8.8.
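For example, to bind to the addresses of one specific interface (the interface name here is a placeholder; use the one on your node):

```yaml
# Bind to every address of a specific interface
network.host: _eth0_
```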
One more point to notice: when we set a custom value for network.host, Elasticsearch assumes that we are moving from development mode to production mode, and upgrades a number of system startup checks from warnings to exceptions.

Set Cluster Name

The cluster name is essential for a node to decide whether it should join a cluster: a node only sends a join request to a cluster that shares its cluster name.

Set Node Name (optional)

Setting a meaningful node name makes it easier for us to locate a node, read logs, etc.
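Both of these settings go into elasticsearch.yml; the names below are placeholders:

```yaml
cluster.name: my-elk-cluster   # must match on every node that should join this cluster
node.name: node-1              # human-readable name shown in logs and APIs
```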

Set Discovery Hosts

When a node starts up, it needs host names to locate the cluster and join it. If we don't set any network configuration, it will try to connect to the other nodes on the same server and form a cluster automatically.
When we move to production, we can set:
discovery.zen.ping.unicast.hosts:
   - 192.168.1.10:9300
   - 192.168.1.11 
   - xx.yy.com 

With the settings finished, we started Elasticsearch on multiple nodes but failed with the following errors.

Fail to Join Master

The first error is:
failed to send join request to master [{zMdeHUD}{zMdeHUDuR4SeYJ_h8Rg53A}{V1ZzP8rSTkK-Xd2xQ-jqeg}{192.168.1.100}{192.168.1.100:9300}{ml.max_open_jobs=10, ml.enabled=true}], reason [RemoteTransportException[[zMdeHUD][192.168.1.100:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[QWjjPXy][172.17.0.1:9300] handshake failed. unexpected remote node {zMdeHUD}{zMdeHUDuR4SeYJ_h8Rg53A}{V1ZzP8rSTkK-Xd2xQ-jqeg}{192.168.1.100}{192.168.1.100:9300}{ml.max_open_jobs=10, ml.enabled=true}]; ]
Searching led us to this GitHub issue, in which the problem is caused by a wrong publish address. Note the strange [172.17.0.1:9300] connection endpoint in the log message.
Checking our config files, we found the following config:
network.host: _site_
This means Elasticsearch chose 172.17.0.1 (likely a Docker bridge address, which is also site-local) as its publish address instead of 192.168.x.x, which caused the error. We can fix it in two ways:
  • specify publish_host explicitly;
  • change network.host to a specific interface;
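Either fix can be sketched in elasticsearch.yml; the address and interface name are placeholders for your own setup:

```yaml
# Option 1: explicitly publish a specific address to other nodes
network.publish_host: "192.168.1.100"

# Option 2: bind via a specific interface instead of _site_
# network.host: _eth0_
```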
Version
After fixing the above error, we met another one:
failed to send join request to master [{QWjjPXy}{QWjjPXyySuWh7npiB6RXFA}{cExbLIRPTLWC7uHZmHlyHg}{192.168.1.200}{192.168.1.200:9300}{ml.max_open_jobs=10, ml.enabled=true}], reason [RemoteTransportException[[QWjjPXy][192.168.1.200:9300][internal:discovery/zen/join]]; nested: IllegalStateException[index [.triggered_watches/Gubv8_pDQpG2uh8vOFTnNg] version not supported: 5.6.1 the node version is: 5.5.2
So we have to upgrade Elasticsearch. In order to upgrade Elasticsearch, we should follow three steps:
  • remove the plugins: x-pack, analysis-smartcn
  • upgrade Elasticsearch
  • re-install the plugins
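The three steps above can be sketched as shell commands; the install path and the package-manager command are assumptions based on a typical Elasticsearch 5.x Debian/Ubuntu package install:

```shell
# Sketch; path /usr/share/elasticsearch and apt-get are assumptions.
# 1. Remove plugins before upgrading
/usr/share/elasticsearch/bin/elasticsearch-plugin remove x-pack
/usr/share/elasticsearch/bin/elasticsearch-plugin remove analysis-smartcn

# 2. Upgrade the Elasticsearch package
sudo apt-get update && sudo apt-get install elasticsearch

# 3. Re-install the plugins for the new version
/usr/share/elasticsearch/bin/elasticsearch-plugin install x-pack
/usr/share/elasticsearch/bin/elasticsearch-plugin install analysis-smartcn
```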

Kibana

With Elasticsearch started, we move on to Kibana.

Kibana Service Start Failure

service kibana start
A normal start shows the following error:
Error: EACCES: permission denied, open '/opt/kibana/optimize/.babelcache.json'
    at Error (native)
    at Object.fs.openSync (fs.js:549:18)
    at Object.fs.writeFileSync (fs.js:1156:15)
    at save (/opt/kibana/node_modules/babel-core/lib/api/register/cache.js:35:19)
    at nextTickCallbackWith0Args (node.js:415:9)
    at process._tickDomainCallback (node.js:385:13)
    at Function.Module.runMain (module.js:443:11)
    at startup (node.js:134:18)
    at node.js:962:3
And this GitHub issue shows the solution: edit the user in the defaults file.
vim /etc/default/kibana
Before:
user="kibana" group="root" chroot="/" chdir="/" nice=""
After:
user="root" group="root" chroot="/" chdir="/" nice=""
We don't know whether running as root otherwise affects Kibana, but no warning or error popped up this time.

Redirect Too Many Times

Now Kibana is started, but when we access the web page, it complains that it redirected too many times.
Because we have installed x-pack and its security features are enabled, we need to add the Elasticsearch credentials in kibana.yml so that Kibana can connect to it:
elasticsearch.username: "elastic"
elasticsearch.password: "changeme"

Logstash

Could Not Start by service

Logstash had different problems this time. When we ran:
service logstash start
it showed no errors and no logs, but also did not run. We guessed it might be a permission problem, so we tried to edit /etc/default/logstash as we had done for Kibana, but it didn't work.
Finally, we found that changing the user in the following config file works:
vi /etc/init/logstash.conf

setuid root
setgid root

Deserialization Error

Logstash complains with errors when we start it with the persistent queue enabled.
This is the issue we posted on the Elastic forum, and it has not been solved yet.
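For reference, the persistent queue we enabled is configured in logstash.yml roughly like this; the queue path and size are placeholders:

```yaml
# logstash.yml (sketch; path and size are placeholders)
queue.type: persisted            # default is "memory"
path.queue: /var/lib/logstash/queue
queue.max_bytes: 1gb             # cap on-disk queue size
```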

Written with StackEdit.
