In order to collect, visualize, and analyze logs, we decided to use the ELK stack (Elasticsearch, Logstash, and Kibana) for these jobs.
In two earlier series of blog posts, we have already introduced some basics of Logstash and Elasticsearch. If you are not familiar with Elasticsearch or Logstash, you may find those posts useful.
ELK Architecture
First, we would like to go through the recommended architecture in the official documentation:
- Multiple nodes – for robustness and resilience against node failure;
- Filebeat – which ensures at-least-once delivery and can load-balance logs across multiple Logstash nodes;
- Logstash – with the persistent queue enabled, to provide protection against node failures;
Elasticsearch
Now we come to how to set up an Elasticsearch cluster. The basic configs to set up a cluster are quite simple.

Bind Address

We can give Elasticsearch an array of addresses to bind to, so that it can respond on different IPs of the same node. For example, if we configure Elasticsearch with:

network.host: "some_ip"

we can only access Elasticsearch via some_ip, but not via localhost. To allow both, we can do the following:

network.host: ["some_ip", "localhost"]
Bind Host vs Published Host
The network config can also be split into two settings:
- bind_host – the addresses the node binds to, which HTTP clients can query;
- publish_host – the single address other nodes use to connect to this node;
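As a sketch, assuming a node whose internal address is 192.168.1.100 (a made-up example), the split config in elasticsearch.yml could look like:

```yaml
# Bind to loopback plus the internal interface so local HTTP clients work...
network.bind_host: ["127.0.0.1", "192.168.1.100"]
# ...but advertise only the internal address to the other cluster nodes.
network.publish_host: "192.168.1.100"
```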
Special Values
It also supports some special values:
- _[networkInterface]_ – addresses of a network interface, for example _en0_.
- _local_ – any loopback address on the system, for example 127.0.0.1.
- _site_ – any site-local address on the system, for example 192.168.0.1.
- _global_ – any globally-scoped address on the system, for example 8.8.8.8.
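As a rough analogy (this is not how Elasticsearch implements it), Python's standard ipaddress module distinguishes the same three address classes, which helps make the special values concrete:

```python
import ipaddress

def classify(addr: str) -> str:
    """Map an IP to the Elasticsearch special value it would roughly match."""
    ip = ipaddress.ip_address(addr)
    if ip.is_loopback:
        return "_local_"    # loopback addresses, e.g. 127.0.0.1
    if ip.is_private:
        return "_site_"     # site-local (RFC 1918) addresses, e.g. 192.168.0.1
    if ip.is_global:
        return "_global_"   # globally-scoped addresses, e.g. 8.8.8.8
    return "other"

print(classify("127.0.0.1"))    # _local_
print(classify("192.168.0.1"))  # _site_
print(classify("8.8.8.8"))      # _global_
```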
Note that as soon as we provide a custom setting for network.host, Elasticsearch assumes that we are moving from development mode to production mode, and upgrades a number of system startup checks from warnings to exceptions.

Set Cluster Name

The cluster name is essential: a node uses it to decide whether it should join a cluster, and it will only send a join request if the names match.

Set Node Name (optional)

Setting a meaningful node name makes it much easier for us to locate a node, read its logs, and so on.

Set Discovery Hosts

When a node starts up, it needs host names to locate the cluster and join it. If we do not set any network configuration, it will try to connect to other nodes on the same server and form a cluster automatically. When we move to production, we can set:
discovery.zen.ping.unicast.hosts:
- 192.168.1.10:9300
- 192.168.1.11
- xx.yy.com
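Putting the pieces together, a minimal production-oriented elasticsearch.yml might look like the sketch below (the cluster name, node name, and addresses are made-up examples):

```yaml
cluster.name: my-logging-cluster    # nodes only join a cluster with a matching name
node.name: node-1                   # a human-readable name for logs and APIs
network.host: "192.168.1.10"        # a specific interface avoids surprises with _site_
discovery.zen.ping.unicast.hosts:   # seed hosts used to locate the cluster
  - 192.168.1.10:9300
  - 192.168.1.11
```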
With these settings finished, we started Elasticsearch on multiple nodes, but it failed with the following errors.
Failed to Join Master
The first error is:

failed to send join request to master [{zMdeHUD}{zMdeHUDuR4SeYJ_h8Rg53A}{V1ZzP8rSTkK-Xd2xQ-jqeg}{192.168.1.100}{192.168.1.100:9300}{ml.max_open_jobs=10, ml.enabled=true}], reason [RemoteTransportException[[zMdeHUD][192.168.1.100:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[QWjjPXy][172.17.0.1:9300] handshake failed. unexpected remote node {zMdeHUD}{zMdeHUDuR4SeYJ_h8Rg53A}{V1ZzP8rSTkK-Xd2xQ-jqeg}{192.168.1.100}{192.168.1.100:9300}{ml.max_open_jobs=10, ml.enabled=true}]; ]
Searching led us to this GitHub issue, in which the problem is caused by a wrong publish address. And indeed we can notice a strange [172.17.0.1:9300] connection endpoint in the log message. Checking our config files, we have the following config:
network.host: _site_
This means Elasticsearch chose 172.17.0.1 as its publish address rather than the 192.xxx one, which causes the error. So we can fix it in two ways:
- specify publish_host explicitly;
- change network.host to a specific interface;
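For example, using the node address that appears in the log above (adjust it to your own interface), the first fix could look like this in elasticsearch.yml:

```yaml
# Advertise the intended LAN address instead of 172.17.0.1
# (172.17.0.1 is typically a Docker bridge address)
network.publish_host: "192.168.1.100"
```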
Version
After fixing the above error, we met another one:

failed to send join request to master [{QWjjPXy}{QWjjPXyySuWh7npiB6RXFA}{cExbLIRPTLWC7uHZmHlyHg}{192.168.1.200}{192.168.1.200:9300}{ml.max_open_jobs=10, ml.enabled=true}], reason [RemoteTransportException[[QWjjPXy][192.168.1.200:9300][internal:discovery/zen/join]]; nested: IllegalStateException[index [.triggered_watches/Gubv8_pDQpG2uh8vOFTnNg] version not supported: 5.6.1 the node version is: 5.5.2
So we have to upgrade Elasticsearch, following three steps:
- remove the plugins: x-pack, analyzer-smartcn
- upgrade Elasticsearch
- re-install the plugins
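A sketch of these steps, assuming a tarball install of Elasticsearch 5.x with elasticsearch-plugin available under bin/ (the actual upgrade command depends on how Elasticsearch was installed, e.g. via a package manager):

```shell
# 1. remove the plugins before upgrading
bin/elasticsearch-plugin remove x-pack
# 2. upgrade Elasticsearch itself (install the new version via your package
#    manager or tarball here)
# 3. re-install the plugins against the new version
bin/elasticsearch-plugin install x-pack
```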
Kibana
With Elasticsearch up, we move on to Kibana.

Kibana Service Start Failure
service kibana start
A normal start shows the following error:

Error: EACCES: permission denied, open '/opt/kibana/optimize/.babelcache.json'
at Error (native)
at Object.fs.openSync (fs.js:549:18)
at Object.fs.writeFileSync (fs.js:1156:15)
at save (/opt/kibana/node_modules/babel-core/lib/api/register/cache.js:35:19)
at nextTickCallbackWith0Args (node.js:415:9)
at process._tickDomainCallback (node.js:385:13)
at Function.Module.runMain (module.js:443:11)
at startup (node.js:134:18)
at node.js:962:3
And this GitHub issue shows the solution: edit the Kibana defaults file

vim /etc/default/kibana

and change the user line from

user="kibana" group="root" chroot="/" chdir="/" nice=""

to

user="root" group="root" chroot="/" chdir="/" nice=""
We are not sure whether running as root affects Kibana in other ways, but no warning or error popped up this time.

Redirect Too Many Times
Now Kibana is started, but when we access the web page, it complains that it has redirected too many times. Because we have installed x-pack and its security features are enabled, we need to add the Elasticsearch credentials in kibana.yml so Kibana can connect to it:

elasticsearch.username: "elastic"
elasticsearch.password: "changeme"
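Before restarting Kibana, we can verify that the credentials are accepted by querying Elasticsearch directly (using the default elastic/changeme pair shown above; replace them with your own):

```shell
# Should return the cluster banner JSON if authentication succeeds
curl -u elastic:changeme http://localhost:9200/
```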
Logstash
Could Not Start via service
Logstash has different problems this time. When we run:

service logstash start

it shows no error and no logs, but also does not run. We guessed it might be a permissions problem, so we tried to edit /etc/default/logstash as we had done for Kibana, but that did not work. Finally, we found the following config file, where changing the user worked:
vi /etc/init/logstash.conf
setuid root
setgid root
Deserialization Error
Logstash complains with errors when we start it with the persistent queue enabled. This is the issue we posted in the Elastic forum, and it has not been solved as of now.
Written with StackEdit.