
Logstash Learning (1): Basic

Today, we will learn some basic concepts of Logstash, an important component of the ELK stack that is responsible for log aggregation. This first post in the series covers the core concepts, pipeline configuration, batching and buffering, and data resiliency.

Concepts

Logstash has three main components: input, filter, and output. It is designed to follow the principle of loose coupling between components, so it adopts the Pipes and Filters design pattern, which makes it easy to add or remove plugins in the log processing pipeline.

Pipeline Configuration

When using Logstash to handle logs, we use a pipeline to define the flow of events. A simple pipeline configuration file can look like the following:

input {
    stdin {}
}

output {
    stdout {}
}

This config only defines the input and output components, because filter is an optional component.

There are many different plugins available for each component, and we will introduce them in the next post.
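As a rough illustration (not from the original post), here is a pipeline sketch that inserts a filter stage between input and output; the grok pattern and the stdout codec are only assumptions for the example:

input {
    stdin {}
}

filter {
    # parse an Apache-style access log line into structured fields (illustrative)
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}

output {
    stdout {
        # pretty-print the parsed event for inspection
        codec => rubydebug
    }
}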

Batch & Buffer

In order to improve the performance of event handling, Logstash uses batching and buffering.

Batch Worker

When fetching events from the data source, Logstash uses batches: a worker collects events from inputs before attempting to execute its filters and outputs. The maximum number of events an individual worker thread collects in a batch can be set in the configuration. If events wait too long, i.e. exceed the configured delay, the worker still processes the smaller batch.
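As a minimal sketch, these are the logstash.yml settings that control batching; the values below are only illustrative:

# logstash.yml (illustrative values)
pipeline.workers: 2        # number of worker threads executing filters and outputs
pipeline.batch.size: 125   # maximum events a worker collects before running filters and outputs
pipeline.batch.delay: 50   # how long (ms) to wait for a full batch before dispatching a smaller one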

Buffer Queue

Logstash also uses an internal queuing model for event buffering. We can specify memory for the legacy in-memory queue, or persisted for disk-based, ACKed queuing.
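A minimal sketch of how the queue type is selected in logstash.yml; the choice shown here is just an example:

# logstash.yml
queue.type: memory       # legacy in-memory queue
# queue.type: persisted  # disk-based, ACKed queue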

Data Resiliency

Persistent Queue

In order to protect against data loss during abnormal termination, Logstash has a persistent queue.

Persistent queues are also useful for Logstash deployments that need large buffers. Instead of deploying and managing a message broker, such as Redis, RabbitMQ, or Apache Kafka, to facilitate a buffered publish-subscriber model, we can enable persistent queues to buffer events on disk and remove the message broker.

The data flow changes to the following after we enable the persistent queue in Logstash.

input → queue → filter + output

The internal persisted queue is implemented with checkpoint files and append-only pages:

First, the queue itself is a set of pages. There are two kinds of pages: head pages and tail pages. The head page is where new events are written. There is only one head page. When the head page is of a certain size (see queue.page_capacity), it becomes a tail page, and a new head page is created. Tail pages are immutable, and the head page is append-only. Second, the queue records details about itself (pages, acknowledgements, etc) in a separate file called a checkpoint file.
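A hedged sketch of enabling and tuning the persistent queue in logstash.yml; the sizes below are only illustrative:

# logstash.yml (illustrative sizes)
queue.type: persisted
queue.page_capacity: 64mb       # size at which the head page becomes a tail page
queue.max_bytes: 1024mb         # total on-disk capacity of the queue
queue.checkpoint.writes: 1024   # force a checkpoint after this many written events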

Limitations

The following are problems not solved by the persistent queue feature:

  • Input plugins that do not use a request-response protocol cannot be protected from data loss. For example: tcp, udp, zeromq push+pull, and many other inputs do not have a mechanism to acknowledge receipt to the sender. Plugins such as beats and http, which do have an acknowledgement capability, are well protected by this queue.
  • It does not handle permanent machine failures such as disk corruption, disk failure, and machine loss. The data persisted to disk is not replicated.

Dead Letter Queue

By default, when Logstash encounters an event that it cannot process because the data contains a mapping error or some other issue, the Logstash pipeline either hangs or drops the unsuccessful event.

In order to protect against data loss in this situation, we can configure Logstash to write unsuccessful events to a dead letter queue instead of dropping them.
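A hedged sketch of this setup: enable the dead letter queue in logstash.yml and then read the failed events back with the dead_letter_queue input plugin. The path below is an assumed data directory, not from the original post:

# logstash.yml
dead_letter_queue.enable: true

# a separate pipeline that reprocesses events from the dead letter queue
input {
    dead_letter_queue {
        path => "/var/lib/logstash/data/dead_letter_queue"   # assumed location under path.data
    }
}

output {
    stdout {}
}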

More Features

Logstash can be used in scenarios like the following:

  • Operational Log
  • Metrics
  • Security Analytics

In order to support these different situations, Logstash provides further qualities:

  • Scalability: Beats can be used to load balance across a group of Logstash nodes (see the sketch after this list).
  • Availability:
    • A minimum of two Logstash nodes are recommended for high availability.
    • Logstash’s adaptive buffering capabilities will facilitate smooth streaming even through variable throughput loads.
  • Resiliency:
    • Beats with request response protocol
    • Persistent queue
  • Secure Transport:
    • Wire encryption
    • Security options when communicating with Elasticsearch
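
For the scalability point, a hedged Filebeat sketch that load balances across two Logstash nodes; the host names are assumptions:

# filebeat.yml (host names are assumptions)
output.logstash:
  hosts: ["logstash-node1:5044", "logstash-node2:5044"]
  loadbalance: true   # distribute events across the listed Logstash hosts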

Message Queue

When it comes to message queue usage, the Logstash documentation suggests not adding one if we only want it as a buffering layer:

For existing users who are utilizing an external queuing layer like Redis or RabbitMQ just for data buffering with Logstash, it’s recommended to use Logstash persistent queues instead of an external queuing layer. This will help with overall ease of management by removing an unnecessary layer of complexity in your ingest architecture.

If we already use Kafka as a data hub, we can integrate Beats and Logstash with it easily:

For users who want to integrate data from existing Kafka deployments or require the underlying usage of ephemeral storage, Kafka can serve as a data hub where Beats can persist to and Logstash nodes can consume from.
The other TCP, UDP, and HTTP sources can persist to Kafka with Logstash as a conduit to achieve high availability in lieu of a load balancer.
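
As a hedged sketch of that setup, a Logstash pipeline can consume from Kafka with the kafka input plugin; the broker address, topic name, and Elasticsearch host below are assumptions:

input {
    kafka {
        # assumed broker address and topic
        bootstrap_servers => "kafka-broker:9092"
        topics => ["logs"]
    }
}

output {
    elasticsearch {
        # assumed Elasticsearch endpoint
        hosts => ["http://localhost:9200"]
    }
}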

