Understanding ELK: Elasticsearch, Logstash, and Kibana – Part 1

Useful links:

pfSense Forum Topic

pfelk.3ilson.com (doesn’t seem to be maintained)

I currently work for a big data company. We take machine data, or digital exhaust, and turn it into searchable, intelligible information for analytics and visualization. As any vendor knows, there are always competitors whose names come up often in conversation. One such competitor, Elastic, has been popping up on my radar as of late, and I wanted to do some personal research into their offerings. The company develops and offers a set of products for ingesting, forwarding, and visualizing data. As part of my research, I downloaded and installed three of the products offered.

Elasticsearch is the part of the suite that indexes data to disk for later searching. Elasticsearch can be configured to receive data directly from endpoints, and because it exposes a REST API, searches can also be run straight from the command line. However, it is utilitarian in nature and comes with a steep learning curve. A data scientist could have a field day with an application like this, but the average analyst would not know where to begin without some type of GUI with which to interact. This is where the other portions of the suite pick up. Logstash and Kibana add a lot to the suite in the way of interaction, both on the ingest side and on the visualization side.
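To make that concrete, here is a minimal sketch of a search against Elasticsearch's standard _search endpoint, written in Python. The local node address is the Elasticsearch default, but the "logs" index name and the "message" field are assumptions made up for the example.

```python
import requests  # third-party HTTP client; the same request can be made with curl

# Full-text search against a hypothetical "logs" index on a default local node.
# The _search endpoint and the query DSL are standard Elasticsearch APIs.
query = {
    "query": {"match": {"message": "error"}},  # match documents whose "message" field contains "error"
    "size": 5,                                 # return at most five hits
}

response = requests.post("http://localhost:9200/logs/_search", json=query)
response.raise_for_status()

# Each hit carries the original document under "_source".
for hit in response.json()["hits"]["hits"]:
    print(hit["_source"])
```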

Logstash handles the “log shipping” functionality for the suite. Logstash accepts feeds from endpoints and passes them on to Elasticsearch for indexing, and it can process the data before it reaches the indices. Through the use of input, filter, and output plugins, Logstash provides a method for capturing and modifying metadata for indexed data. For instance, Logstash allows an administrator to specify the Elasticsearch indices to which data will be written. Through the use of grok filters, an administrator can parse raw events into named fields, which then become available for searches against the Elasticsearch indices, as in the sketch below. I hope to provide some more detail on this particular aspect in a later post.
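As a rough illustration, here is a minimal Logstash pipeline configuration using the standard udp input, grok filter, and elasticsearch output plugins. The syslog port, the grok pattern, the field names, and the daily index pattern are assumptions made for the example, not a recommended setup.

```
input {
  udp {
    port => 5140            # assumed port for incoming syslog traffic
    type => "syslog"
  }
}

filter {
  grok {
    # Parse the raw line into named fields that become searchable in Elasticsearch
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:source_host} %{GREEDYDATA:event_message}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}"   # one index per day, chosen here for the example
  }
}
```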

Kibana is what pulls the whole suite together. It is a robust web GUI that makes searching much easier. There is a search box at the top of the GUI where one can enter the desired search criteria and filter results from Elasticsearch. There is a selector for the index to be searched, as well as a time picker with predefined ranges by which to limit the search. Within this interface, an administrator can create visualizations and dashboards, which are essentially collections of visualizations on one screen. Some administration is available from within Kibana, but I have found that there are better tools with which to manage indices and other portions of the Elastic ecosystem. One such tool is Cerebro, which can be downloaded from GitHub. I will cover that in more detail as I gain more experience with it.
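For a sense of the index information such management tools surface, here is a small sketch that lists indices through Elasticsearch's standard _cat/indices API, the same kind of overview that Cerebro and Kibana's management screens present graphically. The local node address is the Elasticsearch default; the column selection is just an example.

```python
import requests  # third-party HTTP client

# List indices on a default local Elasticsearch node via the standard _cat/indices API.
# "v" adds a header row and "h" limits the output to the named columns.
response = requests.get(
    "http://localhost:9200/_cat/indices",
    params={"v": "true", "h": "index,health,docs.count,store.size"},
)
response.raise_for_status()
print(response.text)  # plain-text table, one row per index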

In my next post, I hope to have some sample dashboards to show, which will further illustrate some of what I have covered here. There are some things that I like very much about ELK, such as its open source nature, but there are cons that I will share in the future as well. The biggest con is the steep learning curve that I believe comes with a tool of this nature. However, there are many more pros and cons which deserve more conversation in the future. I would love to hear your thoughts on ELK in the comments below. If you have any hints or tips to share, please do so for the community’s benefit.
