
Posts

Currently showing posts from September, 2017

ELK Setup Problem List

In order to collect, visualize, and analyze logs, we decided to use ELK for these jobs. In two serial blog posts, we have already introduced some basics about Logstash and Elasticsearch: Elasticsearch Learning (1): Introduction and Logstash Learning (1): Basic. If you are not familiar with Elasticsearch and Logstash, you may find those posts useful.

ELK Architecture

First, we would like to go through the architecture recommended in the official documentation:

- Multiple nodes – for robustness and resilience against node failure;
- Filebeat – ensures at-least-once delivery and load-balances log shipment across multiple Logstash nodes;
- Logstash – with the persistent queue enabled, to provide protection across node failures.

Elasticsearch

Now we come to how to set up an Elasticsearch cluster. The basic configuration for a cluster is very easy in Elasticsearch.

Bind Address

We can choose an array of addresses for Elasticsearch to bind to, so Elasticsear…
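As a minimal sketch of the cluster and bind-address settings described above (assuming Elasticsearch 5.x; the cluster name, node name, and seed addresses are example values, not from the original post), elasticsearch.yml might look like:

```yaml
# elasticsearch.yml -- minimal cluster sketch (example values)
cluster.name: my-elk-cluster          # all nodes joining the cluster share this name
node.name: es-node-1                  # unique per node
network.host: ["_local_", "_site_"]   # bind to loopback plus the site-local address
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2"]  # seed nodes for discovery
```

With `network.host` given as an array, Elasticsearch binds to every listed address, which is what the "array of addresses" above refers to.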

Elasticsearch Problem Lists (3): Spring Upgrade

In order to use the new features of Elasticsearch/Kibana, we decided to upgrade to a newer version of Spring. Because this version of Spring is not a released, stable version, we met some problems and bugs. This post records them for future readers. In the last blog we specified the Spring Boot Starter version as 1.5.3; we have to upgrade to 2.0.0.M3 to support ELK 5.x.

Add Repo

The first step is to update the Maven dependency. Because the new version of Spring Boot that supports ES 5.x is not released, we have to add a custom repository to download the pom and jar files: <repository> <id>spring-milestone</id> <name>spring-milestone</name> <url>http://repo.spring.io/milestone/</url> </repository> Besides the code repo, we also need to update the plugin repo, because we use the Spring Boot Maven plugin: <plugin> <groupId>org.springframework.boot</groupId>
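The excerpt cuts off at the plugin snippet, but the complete repository configuration the paragraph describes would typically look like the following sketch in pom.xml (a standard Maven layout; the ids and url mirror the milestone repo quoted above, the rest is an assumption, not the post's exact snippet):

```xml
<!-- pom.xml sketch: milestone repos for pre-release Spring Boot (example layout) -->
<repositories>
  <repository>
    <id>spring-milestone</id>
    <name>spring-milestone</name>
    <url>http://repo.spring.io/milestone/</url>
  </repository>
</repositories>
<!-- the Spring Boot Maven plugin itself must also resolve from the milestone repo -->
<pluginRepositories>
  <pluginRepository>
    <id>spring-milestone</id>
    <name>spring-milestone</name>
    <url>http://repo.spring.io/milestone/</url>
  </pluginRepository>
</pluginRepositories>
```

Maven resolves regular dependencies from `<repositories>` but build plugins from `<pluginRepositories>`, which is why both entries are needed for a pre-release Spring Boot.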

Logstash Learning (3): Application

In this blog, we will apply Logstash in our project. We first choose the tech stack, then solve the problems one by one.

Design

Because there are many different types of input source, we have to choose the one suitable for our system:

- the business application writes directly to Logstash via socket/HTTP;
- log files are parsed by Filebeat, then sent to Logstash;
- via a message queue, where the queue is used as a buffer.

Taking availability and data consistency into account, we decided on the following data flow:

business app -> slf4j & logback -> file -> filebeat -> logstash -> ES -> kibana

Source: SLF4J & Logback

SLF4J & Logback are very powerful logging libraries for Java; we have introduced some of their features here.

Log Pattern

In our scenario, we customize our logging pattern as follows, which is very similar to Spring Boot's default logging pattern: logging.pattern.file = %d %5p --- [%t] %-40.40c{
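The filebeat -> logstash -> ES leg of the data flow above can be sketched as a minimal Logstash pipeline (the port, ES host, and index name are example values, not taken from the original post):

```
# logstash.conf -- sketch of the filebeat -> logstash -> ES leg (example values)
input {
  beats {
    port => 5044                          # Filebeat ships harvested log lines here
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]           # example ES endpoint
    index => "app-logs-%{+YYYY.MM.dd}"    # example daily index naming
  }
}
```

A grok filter would normally sit between input and output to parse the log pattern above into fields; it is omitted here to keep the sketch minimal.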