@nickname55
Forked from AndreyBespamyatnov/readme.txt
Created April 14, 2018 18:28
Setup ELK Stack on Ubuntu
Elasticsearch is an enterprise-grade open source search server based on Apache Lucene. It offers real-time distributed search and analytics through a RESTful web interface and schema-free JSON documents. Elasticsearch is developed in Java and released under the Apache License. It currently ranks second among the most popular enterprise search engines, behind Apache Solr.
This guide will help you install Elasticsearch on Ubuntu.
== Prerequisites ==
Make sure you have the latest JDK installed on your system. If you don't, set it up:
<syntaxhighlight lang="bash">
sudo apt-get remove --purge openjdk*
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update
sudo apt-get -y install oracle-java8-installer
java -version
</syntaxhighlight>
You should see output like this:
<syntaxhighlight lang="bash">
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
</syntaxhighlight>
== Install Elasticsearch ==
Elasticsearch can be downloaded directly from the official website; in addition, pre-built binary packages are offered for RHEL and Debian derivatives.
Download and install the public signing key.
<syntaxhighlight lang="bash">
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
</syntaxhighlight>
Add and enable Elasticsearch repository.
<syntaxhighlight lang="bash">
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elk.list
</syntaxhighlight>
Install Elasticsearch, then enable and start the service.
<syntaxhighlight lang="bash">
sudo apt-get update
sudo apt-get install -y elasticsearch
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
</syntaxhighlight>
== Configure Elasticsearch ==
Elasticsearch configuration files can be found in the /etc/elasticsearch/ directory; you will see two files there, elasticsearch.yml and logging.yml.
logging.yml manages the logging of Elasticsearch; log files are stored in the /var/log/elasticsearch directory.
elasticsearch.yml is the main configuration file of Elasticsearch and contains the default settings for running a production cluster.
By default, Elasticsearch binds to all network interfaces (0.0.0.0) and listens on ports 9200–9300 for HTTP traffic and 9300–9400 for internal node-to-node communication; the ranges mean that if a port is busy, the next port is tried automatically.
Edit elasticsearch.yml file.
<syntaxhighlight lang="bash">
sudo nano /etc/elasticsearch/elasticsearch.yml
</syntaxhighlight>
To make Elasticsearch listen on a particular IP, put the IP address in the following setting. To protect Elasticsearch from public access, you can set it to listen on localhost only.
<syntaxhighlight lang="bash">
### Listening on particular IPv4 ###
network.bind_host: 192.168.0.1
### Disabling public access ###
network.bind_host: 127.0.0.1
</syntaxhighlight>
Restart the Elasticsearch service.
<syntaxhighlight lang="bash">
sudo systemctl restart elasticsearch
</syntaxhighlight>
Once you have restarted it, wait at least a minute for Elasticsearch to start fully; otherwise the test will fail. Elasticsearch should now be listening on port 9200 for HTTP requests; we will use curl to get the response.
<syntaxhighlight lang="bash">
curl -X GET 'http://localhost:9200'
</syntaxhighlight>
You should get a response like the one below.
<syntaxhighlight lang="text">
{
  "name" : "gf5QYAn",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "S6gZNkMERpSr-MGXqEFUJw",
  "version" : {
    "number" : "5.5.2",
    "build_hash" : "b2f0c09",
    "build_date" : "2017-08-14T12:33:14.154Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}
</syntaxhighlight>
== Install Logstash ==
Logstash is an open source tool that collects logs, parses them, and stores them in Elasticsearch for searching. More than 160 plugins are available for Logstash, providing the capability to process different types of events with no extra work.
Install Logstash using the apt-get command.
<syntaxhighlight lang="bash">
sudo apt-get install -y logstash
</syntaxhighlight>
== Configure Logstash ==
Logstash configuration lives in /etc/logstash/conf.d/. If no files exist there, create a new one. A Logstash configuration consists of three sections: input, filter, and output; all three can live in a single file, or each section can have its own file ending with .conf.
I recommend using a single file holding the input, filter, and output sections.
<syntaxhighlight lang="bash">
sudo nano /etc/logstash/conf.d/logstash.conf
</syntaxhighlight>
<syntaxhighlight lang="text">
input {
  udp {
    port => 5960
    codec => plain {
      charset => "UTF-8"
    }
    type => "log4net"
  }
}

filter {
  mutate {
    add_field => [ "hostip", "%{host}" ]
    add_field => [ "duration", 0 ]
  }
  dns {
    reverse => [ "host" ]
    action => "replace"
  }
  if [type] == "log4net" {
    grok {
      break_on_match => true
      remove_field => [ "message" ]
      match => {
        message => "(?m)%{TIMESTAMP_ISO8601:sourceTimestamp} \[%{NUMBER:threadid}\] %{LOGLEVEL:loglevel} +- %{IPORHOST:tempHost} - %{DATA:applicati$
      }
      match => {
        message => "(?m)%{TIMESTAMP_ISO8601:sourceTimestamp} \[%{NUMBER:threadid}\] %{LOGLEVEL:loglevel} +- %{IPORHOST:tempHost} - %{DATA:applicati$
      }
      match => {
        message => "(?m)%{TIMESTAMP_ISO8601:sourceTimestamp} \[%{NUMBER:threadid}\] %{LOGLEVEL:loglevel} +- %{IPORHOST:tempHost} - %{DATA:applicati$
      }
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => [ "message" , "%{tempMessage}" ]
        replace => [ "host" , "%{tempHost}" ]
      }
    }
    mutate {
      remove_field => [ "tempMessage" ]
      remove_field => [ "tempHost" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
  stdout { codec => rubydebug }
}
</syntaxhighlight>
Now start and enable Logstash.
<syntaxhighlight lang="bash">
sudo systemctl start logstash
sudo systemctl enable logstash
</syntaxhighlight>
You can troubleshoot any issues by checking the log below.
<syntaxhighlight lang="bash">
cat /var/log/logstash/logstash-plain.log
</syntaxhighlight>
== Install & Configure Kibana ==
Kibana provides visualization of the logs stored in Elasticsearch. The Elastic repository added earlier also serves Kibana, so it can be installed with apt-get.
<syntaxhighlight lang="bash">
sudo apt-get install -y kibana
</syntaxhighlight>
Edit the kibana.yml file.
<syntaxhighlight lang="bash">
sudo nano /etc/kibana/kibana.yml
</syntaxhighlight>
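The defaults are usually fine for this setup; the settings most worth reviewing are shown below (key names are for the Kibana 6.x series installed from this repository). With the Nginx reverse proxy described next, Kibana should stay bound to localhost:

```yaml
# /etc/kibana/kibana.yml — commonly adjusted settings (6.x key names)
server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"
```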
== Install and configure Nginx as a reverse proxy ==
We will use Nginx as a reverse proxy so that Kibana can be reached from a public IP address. To install Nginx, run:
<syntaxhighlight lang="bash">
sudo apt-get install nginx
</syntaxhighlight>
Create a basic authentication file with the openssl command:
<syntaxhighlight lang="bash">
echo "admin:$(openssl passwd -apr1 YourStrongPassword)" | sudo tee -a /etc/nginx/htpasswd.kibana
</syntaxhighlight>
Generate a self-signed SSL certificate.
Then delete the default Nginx virtual host:
<syntaxhighlight lang="bash">
sudo rm /etc/nginx/sites-enabled/default
</syntaxhighlight>
and create a virtual host configuration file for our Kibana instance
<syntaxhighlight lang="bash">
sudo nano /etc/nginx/sites-available/kibana
</syntaxhighlight>
<syntaxhighlight lang="text">
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 default_server ssl http2;
    server_name _;

    ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
    ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
    ssl_session_cache shared:SSL:10m;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.kibana;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
</syntaxhighlight>
Activate the server block by creating a symbolic link.
<syntaxhighlight lang="bash">
sudo ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/kibana
</syntaxhighlight>
Test the Nginx configuration and restart Nginx.
<syntaxhighlight lang="bash">
sudo nginx -t
sudo service nginx restart
</syntaxhighlight>
Start Kibana and enable it on system startup.
<syntaxhighlight lang="bash">
sudo systemctl start kibana
sudo systemctl enable kibana
</syntaxhighlight>
Access Kibana through the Nginx reverse proxy using the following URL; you will be prompted for the basic authentication credentials created earlier.
<syntaxhighlight lang="bash">
https://your-ip-address/
</syntaxhighlight>
== Test with Sample Data ==
I'm using log4net and the MSTest framework; this is the easiest way to test the setup.
Create a new test project, add the log4net NuGet package, and add this test class to the project.
<syntaxhighlight lang="csharp">
using System;
using log4net;
using log4net.Config;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace TestProject1.Tests.Log4net
{
    [TestClass]
    public class Log4NetLogstashIntegrrationTests
    {
        private static readonly ILog Log = LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

        [TestInitialize]
        public void RunBeforeAnyTests()
        {
            BasicConfigurator.Configure();
            log4net.Config.XmlConfigurator.Configure();
        }

        [TestMethod]
        public void Test()
        {
            Log.Info("Info message");
            Log.Warn("Warning message");
            Log.Error("Error message", new NotImplementedException("Test Error NotImplementedException", new Exception("Error message Exception")));
        }
    }
}
</syntaxhighlight>
Add a new file named 'app.config' to the root folder of the project and add the sections below. Note that RemoteAddress must point to the host running Logstash, and RemotePort must match the port of the Logstash udp input (5960 in the configuration above).
<syntaxhighlight lang="xml">
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>
  <log4net>
    <appender name="UdpAppender" type="log4net.Appender.UdpAppender">
      <RemoteAddress value="192.168.127.128" />
      <RemotePort value="5960" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level - %property{log4net:HostName} - PowerSesamm.Tests - %logger - %message%newline" />
      </layout>
    </appender>
    <root>
      <level value="ALL" />
      <appender-ref ref="UdpAppender" />
    </root>
  </log4net>
</configuration>
</syntaxhighlight>
The output of the test, before being parsed by Logstash, should look like this:
<syntaxhighlight lang="text">
2018-04-09 12:56:25,362 [12] INFO - RD0046681 - SomeApplication - PowerSesamm.Tests.Log4net.Log4NetLogstashIntegrrationTests - Info message
2018-04-09 12:56:25,390 [12] WARN - RD0046681 - SomeApplication - PowerSesamm.Tests.Log4net.Log4NetLogstashIntegrrationTests - Warning message
2018-04-09 12:56:25,391 [12] ERROR - RD0046681 - SomeApplication - PowerSesamm.Tests.Log4net.Log4NetLogstashIntegrrationTests - Error message
System.NotImplementedException: Test Error NotImplementedException ---> System.Exception: Error message Exception
--- End of inner exception stack trace ---
</syntaxhighlight>
The same output after parsing:
[[File:Kibana-example.png|border|left|frameless|900x900px]]
Now, when you run your test, you should see new data on the Kibana site; from there you can configure index patterns, create visualizations, and so on.