@ashrithr
Last active December 7, 2022 01:47
Installing ELK on a single machine

Installing ELK (CentOS)

This is a short step-by-step guide to installing the Elasticsearch, Logstash, and Kibana (ELK) stack in a CentOS environment to gather and analyze logs.

I. Install JDK

rpm -ivh https://dl.dropboxusercontent.com/u/5756075/jdk-7u45-linux-x64.rpm

II. Install & Configure ElasticSearch

Add repository

rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch
cat > /etc/yum.repos.d/elasticsearch.repo <<EOF
[elasticsearch-1.3]
name=Elasticsearch repository for 1.3.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.3/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
EOF

Install ElasticSearch

yum -y install elasticsearch

Configure ElasticSearch

  1. Increase the open-file limit for the elasticsearch user:

    echo 'elasticsearch soft nofile 32000' >> /etc/security/limits.conf
    echo 'elasticsearch hard nofile 32000' >> /etc/security/limits.conf
    
  2. Configure the Elasticsearch data storage path

    echo 'path.data: /data/es/logs' >> /etc/elasticsearch/elasticsearch.yml
    mkdir -p /data/es/logs
    chown -R elasticsearch:elasticsearch /data/es/logs
    
  3. Prevent the Elasticsearch process from swapping (try to lock its address space into RAM)

    sed -i "s|^# bootstrap.mlockall:.*$|bootstrap.mlockall: true|" /etc/elasticsearch/elasticsearch.yml
    
  4. Change the JVM heap size

    sed -i "s|^#ES_HEAP_SIZE=.*$|ES_HEAP_SIZE=4g|" /etc/sysconfig/elasticsearch
    

    NOTE: Make sure the machine has enough RAM before raising the Elasticsearch daemon's JVM heap size, and adjust the value accordingly.

  5. Start ElasticSearch

    service elasticsearch start
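Regarding the heap-size step above: a common rule of thumb (my assumption, not part of the original guide) is to give Elasticsearch roughly half of the machine's physical RAM. One quick way to derive such a value is:

```shell
# Suggest a heap size of roughly half of total RAM, read from /proc/meminfo
# (prints e.g. ES_HEAP_SIZE=2048m; adjust as appropriate for your workload)
awk '/^MemTotal/ {printf "ES_HEAP_SIZE=%dm\n", $2/1024/2}' /proc/meminfo
```

The printed value can then be substituted into the sed command in step 4 in place of `4g`.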
    

III. Install & Configure Kibana

  1. Download Kibana

    cd /opt
    wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.0.tar.gz
    tar xzf kibana-3.1.0.tar.gz
    ln -s kibana-3.1.0 kibana
    
  2. Install Nginx

    rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
    yum -y install nginx
    
  3. Configure Nginx to serve Kibana

    mkdir -p /usr/share/nginx/kibana3
    cp -R /opt/kibana/* /usr/share/nginx/kibana3/
    
  4. Download sample nginx config:

    cd ~; curl -OL https://raw.githubusercontent.com/elasticsearch/kibana/kibana3/sample/nginx.conf
    sed -i "s|kibana.myhost.org|$(hostname -f)|" nginx.conf
    sed -i "s|root.*/usr/share/kibana3;|root /usr/share/nginx/kibana3;|" nginx.conf
    cp ~/nginx.conf /etc/nginx/conf.d/default.conf
    

    NOTE: If the sample nginx.conf is missing at that URL, try this: https://github.com/elasticsearch/kibana/blob/kibana3/sample/nginx.conf; it generally lives in some other branch of the Kibana repository.
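For orientation, the sample config is roughly shaped like the sketch below. This is an assumed outline, not the actual file; the sed commands in step 4 adjust the real one, and the downloaded copy is authoritative:

```nginx
# Assumed outline of the Kibana 3 sample nginx.conf (illustrative only)
server {
  listen                80;
  server_name           kibana.myhost.org;          # replaced by sed with $(hostname -f)
  root                  /usr/share/nginx/kibana3;   # where the Kibana files were copied
  index                 index.html;

  # Basic auth using the htpasswd file created in the next step
  auth_basic            "Restricted";
  auth_basic_user_file  /etc/nginx/conf.d/kibana.myhost.org.htpasswd;
}
```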

  5. Install httpd-tools (CentOS's equivalent of apache2-utils) to generate a username/password pair

    yum -y install httpd-tools-2.2.15
    htpasswd -c /etc/nginx/conf.d/$(hostname -f).htpasswd admin
    
  6. Start Nginx to serve Kibana, and enable it so Kibana remains available after reboots

    service nginx start
    chkconfig nginx on
    

IV. Install & Configure LogStash

Add Repository

cat > /etc/yum.repos.d/logstash.repo <<EOF
[logstash-1.4]
name=logstash repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
EOF

Install logstash

yum -y install logstash logstash-contrib

Generating SSL Certificates

Since we are going to use Logstash Forwarder to ship logs from our servers to our Logstash server, we need to create an SSL certificate and key pair. The Logstash Forwarder uses the certificate to verify the identity of the Logstash server.

Generate the SSL certificate and private key, in the appropriate locations (/etc/pki/tls/...), with the following command:

cd /etc/pki/tls; sudo openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
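If you want to sanity-check this openssl invocation before touching /etc/pki/tls, the same command can be rehearsed in a scratch directory; a hedged sketch, not a required step:

```shell
# Rehearse the certificate generation in a throwaway directory, then inspect it
tmp=$(mktemp -d)
openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 \
  -keyout "$tmp/logstash-forwarder.key" -out "$tmp/logstash-forwarder.crt" 2>/dev/null
# Print the subject and validity dates; 'notAfter' should be ~10 years out
openssl x509 -in "$tmp/logstash-forwarder.crt" -noout -subject -dates
rm -rf "$tmp"
```

The same `openssl x509 -noout -subject -dates` inspection works against the real /etc/pki/tls/certs/logstash-forwarder.crt once it is generated.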

The logstash-forwarder.crt file will be copied to every server that will send logs to Logstash, but we will do that a little later. Let's complete our Logstash configuration first.

Configure logstash

cat > /etc/logstash/conf.d/01-lumberjack-input.conf <<EOF
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
EOF

This specifies a lumberjack input that listens on TCP port 5000 and uses the SSL certificate and private key we created earlier.

Now let's create another config file, in which we add a filter for syslog messages:

cat > /etc/logstash/conf.d/10-syslog.conf <<EOF
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
EOF

This filter looks for logs labeled with type "syslog" (by a Logstash Forwarder) and tries to use grok to parse incoming syslog lines, making them structured and queryable.
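As a concrete illustration (my own example line, not from the guide), a message such as `Dec  7 01:47:33 myhost sshd[1234]: Accepted publickey for admin` would come out of the grok pattern above with fields roughly like:

```
syslog_timestamp => "Dec  7 01:47:33"
syslog_hostname  => "myhost"
syslog_program   => "sshd"
syslog_pid       => "1234"
syslog_message   => "Accepted publickey for admin"
```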

Now let's create another config file that tells Logstash to store the logs in Elasticsearch:

cat > /etc/logstash/conf.d/30-lumberjack-output.conf <<EOF
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
EOF

Start logstash

service logstash start

V. Setup Logstash Forwarder

Note: Do these steps for each server that you want to send logs to your Logstash Server.

Copy SSL certificate to logstash forwarder agents from logstash server:

scp /etc/pki/tls/certs/logstash-forwarder.crt [user]@[server]:/tmp

NOTE: Replace [user] with a username that can SSH into the Logstash agent, and [server] with the hostname/IP address of the Logstash agent

Install logstash forwarder

rpm -ivh http://packages.elasticsearch.org/logstashforwarder/centos/logstash-forwarder-0.3.1-1.x86_64.rpm

Install logstash forwarder init script

cd /etc/init.d/; sudo curl -o logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_init
chmod +x logstash-forwarder
cat > /etc/sysconfig/logstash-forwarder <<EOF
LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"
EOF
cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs

Configure logstash forwarder

LS_SERVER=[LOGSTASH_SERVER_FQDN]
cat > /etc/logstash-forwarder <<EOF
{
  "network": {
    "servers": [ "${LS_SERVER}:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
       ],
      "fields": { "type": "syslog" }
    }
   ]
}
EOF

NOTE: Be sure to replace [LOGSTASH_SERVER_FQDN] with the FQDN of your logstash server
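Since a stray comma or quote in this file silently breaks the forwarder, it can be worth validating the JSON after editing. A hedged sketch using Python's standard-library json.tool (python or python3, whichever the system has):

```shell
# Validate the forwarder config as JSON; prints a confirmation only on success
PY=$(command -v python || command -v python3)
"$PY" -m json.tool /etc/logstash-forwarder > /dev/null \
  && echo "logstash-forwarder config is valid JSON"
```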

@zzzuzik commented Sep 29, 2014

thanks for the timesaver!
for Kibana - service nginx start

@ashrithr (Author)

@zzzuzik thanks for the update will correct it

@ChaitanyaSamala

The best document I found for Logstash.
Very helpful for understanding, and everything is put together in a simple way.

@rhoekstra

Hi, nice quickstart guide !!

A few adjustments / improvements from my part to make it smoothly:

  • Elasticsearch:
    -- Fourth bullet (the '1.' numbering should probably increment):
    I don't do this step on a virtual machine used for testing; you could point that out for people (like me at first) who blindly copy-paste and then wonder why it doesn't work :). < I was after a real quick get-it-up-and-running and was in 'execute, not interpret' mode. >
  • Install and configure Kibana:
    --Bullet 4: I used the proposed 'raw' alternative URL you provided and altered the 'sed' commands
curl -OL https://raw.githubusercontent.com/elasticsearch/kibana/kibana3/sample/nginx.conf
sed -i "s|kibana.myhost.org|$(hostname -f)|" nginx.conf
sed -i "s|root.*/usr/share/kibana3;|root /usr/share/nginx/kibana3;|" nginx.conf
cp ~/nginx.conf /etc/nginx/conf.d/default.conf

-- Bullet 5: use effective hostname for the htpasswd filename

htpasswd -c /etc/nginx/conf.d/$(hostname -f).htpasswd admin

-- Bullet 6: To make nginx start on system reboot, add:

chkconfig nginx on

Then I have the following issue:
the firewall is open (from outside as well as on localhost), I can contact nginx on port 80, and I can contact Elasticsearch on port 9200 using the directives in config.js, from both my workstation (a MacBook Pro hosting the VM) and the VM itself, so both can connect to 'http://<vm fqdn>:9200' successfully

Still, the kibana page shows the error:
Error Could not contact Elasticsearch at http://<vm fqdn>:9200. Please ensure that Elasticsearch is reachable from your system.

Note that I purposely replaced the fqdn with the directive '<vm fqdn>'.

Do you have any advice?

@rhoekstra

Additionally:
Create the logstash-forwarder config on the Logstash server, so that '$(hostname -f)' produces the right hostname; it can then be scp'ed along with the certificate

@ashrithr (Author)

ashrithr commented Jan 7, 2015

@rhoekstra thanks for taking your time to suggest improvements, I have updated the guide to accommodate most of your improvements.

@Movah commented Feb 4, 2015

Nice job... I just ran into one problem: when I go to the URL after everything is set up, I only get the "Welcome to nginx!" screen.

I guess I'm missing something in the nginx config file?

@Artistan

My install also errors in kibana...

"Still, the kibana page shows the error:
Error Could not contact Elasticsearch at http://:9200. Please ensure that Elasticsearch is reachable from your system."

@nexxomxu commented Apr 8, 2015

How to fix these errors? Did you find solution?

@bit-smacker

Great guide! So many others were confusing, broken, or incomplete, but these steps worked for me on the first try. Thanks!

FYI: You can search through the packages XML file for the current logstash-forwarder version number used in step V.
http://packages.elasticsearch.org/

Also, I received the 9200 connection error after rebooting the server. Elasticsearch was not configured to automatically start. After I started it, the error is no longer displayed.

chkconfig elasticsearch on
service elasticsearch start

@ajitpsonawane

Hey, it's a very good doc for ELK installation

@vivekjuneja

Not sure if @Artistan or @bit-smacker have solved the problem yet. Please take note of the following.

If Kibana is required to be accessed remotely, then we would need to configure elasticsearch to enable CORS

echo 'http.cors.allow-origin: "/.*/"' >> /etc/elasticsearch/elasticsearch.yml
echo 'http.cors.enabled: true' >> /etc/elasticsearch/elasticsearch.yml 

@ssampati

I have a similar single-box ELK setup; with this installation, can we send syslogs from network devices without any forwarder?
