Today, we will learn some basic concepts of Logstash, an important component of ELK that is responsible for log aggregation. This series will cover the following topics:
- Introduction: this post
- Configuration
- Application
Concepts
Logstash has three main components: input, filter, and output. It is designed around loose coupling between components, adopting the Pipes and Filters design pattern, which makes Logstash plugins easy to add to or remove from the log-processing pipeline.
Pipeline Configuration
When using Logstash to handle logs, we use a pipeline to define the flow of log events. A simple pipeline configuration file looks like the following:
input {
  stdin {}
}
output {
  stdout {}
}
This config only defines the input and output components, because the filter component is optional.
Many different plugins are available for each component; we will introduce them in the next post.
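As an illustration, a pipeline that uses all three components might look like the following sketch; the grok pattern is a made-up example, not a recommended one:

input {
  stdin {}
}
filter {
  # grok parses unstructured text into named fields
  grok {
    match => { "message" => "%{IPORHOST:client} %{WORD:method} %{URIPATHPARAM:request}" }
  }
}
output {
  # rubydebug pretty-prints each event for inspection
  stdout { codec => rubydebug }
}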
Batch & Buffer
In order to improve the performance of log event handling, Logstash employs batching and buffering.
Batch Worker
When fetching events from a data source, Logstash works in batches: each worker thread collects a number of events from its inputs before attempting to execute its filters and outputs. The maximum number of events an individual worker thread collects in a batch is configurable. If events arrive too slowly to fill a batch within the configured wait time, Logstash still processes the partial batch.
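Both knobs live in logstash.yml; a minimal sketch, with values matching the documented defaults in recent versions:

# logstash.yml
pipeline.batch.size: 125   # max events a worker thread collects before flushing
pipeline.batch.delay: 50   # ms to wait for more events before flushing a partial batch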
Buffer Queue
Logstash also uses an internal queueing model for event buffering. We can specify memory for the legacy in-memory queue, or persisted for disk-based ACKed queueing.
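The choice is a single setting in logstash.yml, roughly:

# logstash.yml
queue.type: persisted   # "memory" is the in-memory default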
Data Resiliency
Persistent Queue
In order to protect against data loss during abnormal termination, Logstash has a persistent queue.
Persistent queues are also useful for Logstash deployments that need large buffers. Instead of deploying and managing a message broker, such as Redis, RabbitMQ, or Apache Kafka, to facilitate a buffered publish-subscriber model, we can enable persistent queues to buffer events on disk and remove the message broker.
After we enable the persistent queue in Logstash, the data flow changes to the following:
input → queue → filter + output
The internal persisted queue is implemented with checkpoint files and append-only pages:
- The queue itself is a set of pages. There are two kinds of pages: head pages and tail pages. The head page is where new events are written, and there is only one head page. When the head page reaches a certain size (see queue.page_capacity), it becomes a tail page, and a new head page is created. Tail pages are immutable, and the head page is append-only.
- The queue records details about itself (pages, acknowledgements, etc.) in a separate file called a checkpoint file.
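The related logstash.yml settings look roughly like this; the sizes shown are the documented defaults, and the path is an assumed example (by default the queue lives under path.data):

# logstash.yml
queue.type: persisted
path.queue: /var/lib/logstash/queue   # assumed example; defaults to a dir under path.data
queue.page_capacity: 64mb             # size at which the head page becomes a tail page
queue.max_bytes: 1024mb               # total on-disk capacity of the queue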
Limitations
The following are problems not solved by the persistent queue feature:
- Input plugins that do not use a request-response protocol cannot be protected from data loss. For example: tcp, udp, zeromq push+pull, and many other inputs do not have a mechanism to acknowledge receipt to the sender. Plugins such as beats and http, which do have an acknowledgement capability, are well protected by this queue.
- It does not handle permanent machine failures such as disk corruption, disk failure, and machine loss. The data persisted to disk is not replicated.
Dead Letter Queue
By default, when Logstash encounters an event that it cannot process because the data contains a mapping error or some other issue, the Logstash pipeline either hangs or drops the unsuccessful event.
In order to protect against data loss in this situation, we can configure Logstash to write unsuccessful events to a dead letter queue instead of dropping them.
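Enabling it is a logstash.yml setting (the path is an assumed example; by default it lives under path.data):

# logstash.yml
dead_letter_queue.enable: true
path.dead_letter_queue: /var/lib/logstash/dlq   # assumed example path

Captured events can then be replayed with the dead_letter_queue input plugin in a separate pipeline, roughly:

input {
  dead_letter_queue {
    path => "/var/lib/logstash/dlq"   # must match path.dead_letter_queue above
    pipeline_id => "main"             # id of the pipeline that wrote the events
    commit_offsets => true            # remember which events were already replayed
  }
}
output {
  stdout { codec => rubydebug }
}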
More Features
Logstash can be used in scenarios like the following:
- Operational logs
- Metrics
- Security Analytics
To support these different situations, Logstash offers several other notable qualities:
- Scalability: Beats can be used to load balance across a group of Logstash nodes (see the Filebeat sketch after this list).
- Availability:
- A minimum of two Logstash nodes are recommended for high availability.
- Logstash’s adaptive buffering capabilities will facilitate smooth streaming even through variable throughput loads.
- Resiliency:
- Beats with a request-response protocol
- Persistent queue
- Secure Transport:
- Wire encryption
- Security options when communicating with Elasticsearch
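For the scalability point above, here is a minimal Filebeat sketch of load balancing across Logstash nodes; the host names are assumed examples:

# filebeat.yml
output.logstash:
  hosts: ["logstash-1:5044", "logstash-2:5044"]   # assumed example hosts
  loadbalance: true   # spread batches across all listed hosts instead of picking one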
Message Queue
When it comes to message queues, the Logstash documentation suggests not using one if we only want it as a buffering layer:
For existing users who are utilizing an external queuing layer like Redis or RabbitMQ just for data buffering with Logstash, it’s recommended to use Logstash persistent queues instead of an external queuing layer. This will help with overall ease of management by removing an unnecessary layer of complexity in your ingest architecture.
If we already use Kafka as a data hub, we can integrate Beats and Logstash with it easily:
For users who want to integrate data from existing Kafka deployments or require the underlying usage of ephemeral storage, Kafka can serve as a data hub where Beats can persist to and Logstash nodes can consume from.
Other TCP, UDP, and HTTP sources can persist to Kafka with Logstash as a conduit to achieve high availability in lieu of a load balancer.
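A sketch of that conduit pattern, where the broker address and topic name are assumed examples: one pipeline persists a TCP source to Kafka, and another consumes from it.

# Pipeline A: persist a TCP source to Kafka
input {
  tcp { port => 5000 }
}
output {
  kafka {
    bootstrap_servers => "kafka-1:9092"   # assumed example broker
    topic_id => "logs"                    # assumed example topic
  }
}

# Pipeline B: consume from Kafka for filtering and indexing
input {
  kafka {
    bootstrap_servers => "kafka-1:9092"
    topics => ["logs"]
    group_id => "logstash"                # consumer group shared by Logstash nodes
  }
}
output {
  stdout {}
}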