Log Management: Elasticsearch, Logstash & Kibana (ELK)!

The Log!

Setup a Log Management Solution with the ELK Stack

Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana 3 is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools rely on Elasticsearch. Elasticsearch, Logstash, and Kibana, when used together, are known as an ELK stack.

Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

Logstash is a tool for receiving, processing and outputting logs. All kinds of logs. System logs, webserver logs, error logs, application logs and just about anything you can throw at it. Sounds great, eh?

Using Elasticsearch as a backend datastore and Kibana as a frontend reporting tool, Logstash acts as the workhorse, creating a powerful pipeline for storing, querying, and analyzing your logs. With an arsenal of built-in inputs, filters, codecs, and outputs, you can harness some powerful functionality with a small amount of effort.

Logstash is a tool for managing your logs.

It helps you take logs and other event data from your systems and move them into a central place. Logstash is open source and completely free.

Landscape: Single Machine Setup

This assumes that the ELK stack and the applications whose logs are to be monitored are all installed on the same machine. It is easy, however, to move to a distributed landscape with one logstash server and logstash-forwarders installed on the individual application servers; then we have true central logging.

Our setup will have the following three main components:

  • Logstash: The component that processes logs
  • Elasticsearch: Stores all of the logs
  • Kibana: Web interface for searching and visualizing logs

Install Elasticsearch

  • Step 1: The best package to download for Ubuntu is the deb package. Grab the deb package by running:
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.1.deb
  • Step 2: Installing directly from a Debian package is done by running:
sudo dpkg -i elasticsearch-1.4.1.deb

This results in Elasticsearch being properly installed in /usr/share/elasticsearch. Note that installing from the Debian package also installs an init script in /etc/init.d/elasticsearch that starts the Elasticsearch server on boot. The server is also started immediately after installation.
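You can confirm what the package installed, and where, using dpkg's standard query commands:

dpkg -s elasticsearch
dpkg -L elasticsearch | grep init.d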

  • Step 3: Locate the configuration files:

If installing from the Debian package, configuration files are found in /etc/elasticsearch.

There will be two main configuration files: elasticsearch.yml and logging.yml. The first configures the Elasticsearch server settings, and the latter, unsurprisingly, the logger settings used by Elasticsearch.

"elasticsearch.yml" will, by default, contain nothing but comments.

"logging.yml" provides configuration for basic logging. You can find the resulting logs in /var/log/elasticsearch.

  • Step 4: Remove Elasticsearch public access

Before continuing, you will want to configure Elasticsearch so it is not accessible from the public Internet: Elasticsearch has no built-in security, and anyone who can reach its HTTP API can control it. This is done by editing elasticsearch.yml.

sudo nano /etc/elasticsearch/elasticsearch.yml

Find the line that specifies network.bind_host, uncomment it, and change the value to localhost so it looks like the following:

network.bind_host: localhost

Then insert the following line somewhere in the file, to disable dynamic scripts:

script.disable_dynamic: true

Save and exit. Now restart Elasticsearch to put the changes into effect:

sudo service elasticsearch restart
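To confirm the new binding took effect, check which address port 9200 is listening on; after the change you should see 127.0.0.1 (or ::1) rather than 0.0.0.0:

sudo netstat -plnt | grep 9200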
  • Step 5: Test your Elasticsearch install

Elasticsearch should now be running on port 9200. Note that Elasticsearch takes some time to fully start, so running the curl command below immediately might fail. It shouldn't take longer than about ten seconds to start responding; if the command still fails after that, something else is likely wrong.

Ensure the server is started by running:

curl -X GET 'http://localhost:9200'

You should see a response similar to the following (the exact values depend on your Elasticsearch version):

{
  "ok" : true,
  "status" : 200,
  "name" : "Xavin",
  "version" : {
    "number" : "1.4.1",
    "build_hash" : "36897d07dadcb70886db7f149e645ed3d44eb5f2",
    "build_timestamp" : "2013-11-13T12:06:54Z",
    "build_snapshot" : false,
    "lucene_version" : "4.5.1"
  },
  "tagline" : "You Know, for Search"
}

If you see a response similar to the one above, Elasticsearch is working properly.

  • Step 6: How to start & stop Elasticsearch?
sudo service elasticsearch stop
sudo service elasticsearch start

Install Kibana

Kibana 3 is a web interface that can be used to search and view the logs that Logstash has indexed.

  • Step 1: Download Kibana to your home directory with the following command:
cd ~; wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.2.tar.gz
  • Step 2: Extract Kibana archive with tar:
tar xvf kibana-3.1.2.tar.gz
  • Step 3: Open the Kibana configuration file for editing:
sudo nano ~/kibana-3.1.2/config.js

In the Kibana configuration file, find the line that specifies the elasticsearch server URL, and replace the port number (9200 by default) with 9494:

elasticsearch: "http://"+window.location.hostname+":9494",

This is necessary because we are planning on accessing Kibana on port 9494 (i.e. http://logstash_server_public_ip:9494/).

  • Step 4: We will be using Nginx to serve our Kibana installation, so let's move the files into an appropriate location. Create a directory with the following command:
sudo mkdir -p /var/www/kibana3
  • Step 5: Now copy the Kibana files into your newly-created directory:
sudo cp -R ~/kibana-3.1.2/* /var/www/kibana3/

Before we can use the Kibana web interface, we have to install Nginx. Let's do that now.

Install Nginx

  • Step 1: Use apt to install Nginx:
sudo apt-get install nginx

Because of the way that Kibana interfaces the user with Elasticsearch (the user's browser needs to be able to access Elasticsearch directly), we need to configure Nginx to proxy requests arriving on port 9494 through to port 9200 (the port that Elasticsearch listens on by default). Luckily, Kibana provides a sample Nginx configuration that sets most of this up.
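Conceptually, the relevant parts of that sample are a server block that serves the static Kibana files and proxies Elasticsearch API paths through to the locally-bound Elasticsearch. A minimal sketch of the idea only (the real sample file is more thorough about which paths it proxies and protects):

server {
  listen *:9494;
  server_name localhost;
  root /var/www/kibana3;

  # Pass Kibana's Elasticsearch queries through to the local ES instance
  location ~ ^/(_nodes|_aliases|.*/_search|.*/_mapping)$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
}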

  • Step 2: Download the sample Nginx configuration from Kibana's github repository to your home directory:
cd ~; wget https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf
  • Step 3: Open the sample configuration file for editing:
nano nginx.conf

Change the listen port to 9494, the server_name to your FQDN (or localhost if you aren't using a domain name), and root to where we installed Kibana, so they look like the following entries:

listen *:9494;

server_name localhost;

root /var/www/kibana3;

Save and exit.

  • Step 4: Now copy it over your Nginx default server block with the following command:
sudo cp nginx.conf /etc/nginx/sites-available/default
  • Step 5: Now we will install apache2-utils so we can use htpasswd to generate a username and password pair:
sudo apt-get install apache2-utils
  • Step 6: Then generate a login that will be used in Kibana to save and share dashboards (substitute your own username):
sudo htpasswd -c /etc/nginx/conf.d/kibana.myhost.org.htpasswd admin
  • Step 7: Then enter a password (e.g. admin) and verify it. The htpasswd file just created is referenced in the Nginx configuration that you edited above.

  • Step 8: Now restart Nginx to put our changes into effect:

sudo service nginx restart

Kibana is now accessible via your server's public IP address, i.e. http://server_public_ip:9494/. If you go there in a web browser, you should see a Kibana welcome page that allows you to view dashboards, but there will be no logs to view because Logstash has not been set up yet. Let's do that now.
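From the server itself, you can sanity-check that Nginx is serving Kibana with curl; an HTTP 200 response means the files in /var/www/kibana3 are being served:

curl -I http://localhost:9494/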

Install Logstash

Logstash is an open source tool for collecting, parsing, and storing logs for future use.

  • Step 1: The Logstash package is available from the same repository as Elasticsearch. Add the repository's public signing key (see the note below), then create the Logstash source list:
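Since we installed Elasticsearch from a standalone deb package rather than from the repository, the repository's public signing key may not be on this machine yet. It can be added as follows (key URL as published for the packages.elasticsearch.org repositories of that era):

wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -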
echo 'deb http://packages.elasticsearch.org/logstash/1.4/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list
  • Step 2: Update your apt package database:
sudo apt-get update
  • Step 3: Install Logstash with this command:
sudo apt-get install logstash=1.4.2-1-2c0f5a1

Logstash is now installed.

  • Step 4: Logstash configuration files use a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.

Let's create a configuration file called logstash-json.conf:

sudo nano /etc/logstash/conf.d/logstash-json.conf

Insert the following input & output configuration:

input {
  file {
    path => [ "/tmp/fmu-admin/fmu.log.json", "/tmp/fk-admin/fk.log.json" ]
    codec => json {
      charset => "UTF-8"
    }
  }
}

output {
  elasticsearch { host => "localhost" }
  stdout { codec => rubydebug }
}

Save and quit.

The path => [ "/tmp/fmu-admin/fmu.log.json", "/tmp/fk-admin/fk.log.json" ] setting is an array of files produced by two different applications, fmu-admin and fk-admin, at different locations but both in JSON format. It assumes the files are generated using the logback JSON encoder, which is described below.
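For reference, one line of such a file looks roughly like the following. This is an illustrative sketch only: the field set (message, logger_name, thread_name, level, level_value, plus the custom appname/version fields configured below) follows logstash-logback-encoder's defaults, and all values are made up:

{"@timestamp":"2014-12-01T10:15:30.123+00:00","@version":1,"message":"Application started","logger_name":"com.example.Application","thread_name":"main","level":"INFO","level_value":20000,"appname":"fk-admin","version":"1.0.0"}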

The output section configures Logstash to store the logs in Elasticsearch; the stdout output with the rubydebug codec also prints each event to the console, which is handy for debugging.

  • Step 5: Restart Logstash to put our configuration changes into effect:
sudo service logstash restart

Our Logstash server is now ready!
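Once the applications described below start writing JSON logs, you can verify that events are reaching Elasticsearch with a quick search (assuming Logstash's default logstash-YYYY.MM.DD index naming):

curl -X GET 'http://localhost:9200/logstash-*/_search?q=*&size=1&pretty'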

JSON Logback Encoder

Given a Spring Boot based application, make the following enhancements:

  • Step 1: Add the following dependency to pom.xml:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>2.5</version>
</dependency>
  • Step 2: Add the following appender to logback.xml:
    <appender name="JSON" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>${logback.loglevel}</level>
        </filter>
        <file>${LOG_FILE}.json</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
            <fileNamePattern>${LOG_FILE}.json.%i</fileNamePattern>
        </rollingPolicy>
        <triggeringPolicy
                class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <MaxFileSize>1MB</MaxFileSize>
        </triggeringPolicy>
        <!-- Exactly one encoder: the LogstashEncoder emits one JSON event per line -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeCallerInfo>true</includeCallerInfo>
            <customFields>{"appname":"${pom.artifactId}","version":"${pom.version}"}</customFields>
        </encoder>
    </appender>

${logback.loglevel}: Ensure it is set either in the properties section of pom.xml or in application.properties.

  • Step 3: Specify the log file location

In application.properties, specify the log file location, e.g.

logging.file=/tmp/fk-admin/fk.log
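With this setting, the JSON appender above writes to ${LOG_FILE}.json, i.e. /tmp/fk-admin/fk.log.json, which is exactly one of the paths watched by the Logstash file input configured earlier. You can watch events being produced as the application runs:

tail -f /tmp/fk-admin/fk.log.json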

Landscape: Multi-Machine Setup

ELK Stack on Ubuntu 14.04 with Central Server
