
Logstash Learning (1): Basic

Today, we will learn some basic concepts of Logstash, an important component of the ELK stack that is responsible for log aggregation. This post, the first in the series, covers the following topics: concepts, pipeline configuration, batch & buffer, data resiliency, and more features.

Concepts

Logstash has three main components: input, filter, and output. It is designed around the principle of loose coupling between components, so it adopts the Pipes and Filters design pattern, which makes Logstash plugins easy to add to or remove from the log processing pipeline.

Pipeline Configuration

When using Logstash to handle logs, we use a pipeline to define the flow of logs. A simple pipeline configuration file looks like the following:

input {
    stdin {}
}

output {
    stdout {}
}

This config only defines the input and output components, because the filter component is optional.
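
To try this pipeline, save the config to a file and point Logstash at it with the -f flag (the file name here is just an example):

bin/logstash -f simple.conf

Each line typed on stdin is then echoed back to stdout as a structured event.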

Many different plugins are available for each component; we will introduce them in the next post.
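
As a small preview, here is a sketch of a pipeline that adds a filter plugin; the grok pattern is just an illustrative choice for parsing Apache-style access logs:

input {
    stdin {}
}

filter {
    # parse each line as an Apache combined access log entry
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}

output {
    stdout {}
}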

Batch & Buffer

To improve the performance of log event handling, Logstash batches events and buffers them internally.

Batch Worker

When fetching events from data sources, Logstash works in batches: each worker thread collects a number of events from inputs before executing filters and outputs on them. The maximum number of events an individual worker thread collects in a batch is configurable. If events wait too long, i.e., the configured delay is exceeded, the undersized batch is still processed.
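
Both limits live in logstash.yml; the values below are the documented defaults at the time of writing:

# logstash.yml
pipeline.batch.size: 125   # max events a worker thread collects before flushing
pipeline.batch.delay: 50   # ms to wait for more events before flushing an undersized batch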

Buffer Queue

Logstash also uses an internal queuing model for event buffering. We can specify memory for legacy in-memory queuing, or persisted for disk-based ACKed queuing.
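
The queue type is selected in logstash.yml, for example:

# logstash.yml
queue.type: persisted   # or "memory" (the default) for in-memory queuing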

Data Resiliency

Persistent Queue

In order to protect against data loss during abnormal termination, Logstash has a persistent queue.

Persistent queues are also useful for Logstash deployments that need large buffers. Instead of deploying and managing a message broker, such as Redis, RabbitMQ, or Apache Kafka, to facilitate a buffered publish-subscriber model, we can enable persistent queues to buffer events on disk and remove the message broker.

The data flow changes to the following after we enable the persistent queue in Logstash.

input → queue → filter + output

The internal persistent queue is implemented with checkpoint files and append-only pages:

First, the queue itself is a set of pages. There are two kinds of pages: head pages and tail pages. The head page is where new events are written. There is only one head page. When the head page is of a certain size (see queue.page_capacity), it becomes a tail page, and a new head page is created. Tail pages are immutable, and the head page is append-only. Second, the queue records details about itself (pages, acknowledgements, etc) in a separate file called a checkpoint file.
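
The page size, total queue capacity, and checkpoint frequency can be tuned in logstash.yml; the values below are the documented defaults at the time of writing:

# logstash.yml
queue.page_capacity: 64mb       # size at which the head page becomes a tail page
queue.max_bytes: 1024mb         # total on-disk capacity of the queue
queue.checkpoint.writes: 1024   # force a checkpoint after this many written events
queue.checkpoint.acks: 1024     # force a checkpoint after this many ACKed events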

Limitations

The following are problems not solved by the persistent queue feature:

  • Input plugins that do not use a request-response protocol cannot be protected from data loss. For example: tcp, udp, zeromq push+pull, and many other inputs do not have a mechanism to acknowledge receipt to the sender. Plugins such as beats and http, which do have an acknowledgement capability, are well protected by this queue.
  • It does not handle permanent machine failures such as disk corruption, disk failure, and machine loss. The data persisted to disk is not replicated.

Dead Letter Queue

By default, when Logstash encounters an event that it cannot process because the data contains a mapping error or some other issue, the Logstash pipeline either hangs or drops the unsuccessful event.

In order to protect against data loss in this situation, we can configure Logstash to write unsuccessful events to a dead letter queue instead of dropping them.
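
The dead letter queue is enabled in logstash.yml, and the captured events can later be replayed with the dead_letter_queue input plugin. A minimal sketch, where the path and pipeline id are example values:

# logstash.yml
dead_letter_queue.enable: true
path.dead_letter_queue: "/var/lib/logstash/dlq"

# replay pipeline: read the failed events back for inspection or reprocessing
input {
    dead_letter_queue {
        path => "/var/lib/logstash/dlq"
        pipeline_id => "main"
        commit_offsets => true
    }
}

output {
    stdout {}
}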

More Features

Logstash can be used in scenarios like the following:

  • Operational Log
  • Metrics
  • Security Analytics

To support these different situations, Logstash offers several other notable features:

  • Scalability: Beats can be used to load balance across a group of Logstash nodes (see the sketch after this list).
  • Availability:
    • A minimum of two Logstash nodes are recommended for high availability.
    • Logstash’s adaptive buffering capabilities will facilitate smooth streaming even through variable throughput loads.
  • Resiliency:
    • Beats with request-response protocol
    • Persistent queue
  • Secure Transport:
    • Wire encryption
    • Security options when communicating with Elasticsearch
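
For the scalability point above, a minimal sketch of Filebeat's Logstash output balancing across two nodes (host names are examples):

# filebeat.yml
output.logstash:
  hosts: ["logstash-node1:5044", "logstash-node2:5044"]
  loadbalance: true   # distribute events across all listed hosts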

Message Queue

When it comes to message queue usage, the Logstash documentation suggests not using one if we just want it as a buffering layer:

For existing users who are utilizing an external queuing layer like Redis or RabbitMQ just for data buffering with Logstash, it’s recommended to use Logstash persistent queues instead of an external queuing layer. This will help with overall ease of management by removing an unnecessary layer of complexity in your ingest architecture.

If we already use Kafka as a data hub, we can integrate Beats and Logstash easily:

For users who want to integrate data from existing Kafka deployments or require the underlying usage of ephemeral storage, Kafka can serve as a data hub where Beats can persist to and Logstash nodes can consume from.
The other TCP, UDP, and HTTP sources can persist to Kafka with Logstash as a conduit to achieve high availability in lieu of a load balancer.
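
For the Kafka data hub case, a minimal sketch of a Logstash pipeline consuming from Kafka and writing to Elasticsearch (broker address, topic, and hosts are example values):

input {
    kafka {
        bootstrap_servers => "kafka-broker:9092"
        topics => ["logs"]
        group_id => "logstash"
    }
}

output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
    }
}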

