Docker + Elasticsearch + Logstash + Kibana

Configuring Elasticsearch + Logstash + Kibana using Docker

Here are some configs to run Elasticsearch as a Docker container, and Logstash + Kibana as well.

You will see that the Elasticsearch config is separate from Logstash and Kibana, because I'm assuming that you want to use the ES container for other things besides log analysis.

Docker installation

Go to https://docs.docker.com/install/, open the "Docker CE" menu, choose your OS, and follow their instructions.

After install, open your terminal and type:

$ docker version

You should see information about the Client and Server versions. Now type:

$ docker-compose version

You should see the docker-compose version as well.
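
As an optional smoke test, you can also run the hello-world image (it is pulled from Docker Hub on first run):

$ docker run --rm hello-world

If it prints its greeting message, the daemon can pull and run containers.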

Elasticsearch

Now that you are able to execute docker and docker-compose commands, create an elasticsearch folder wherever you want (as a suggestion, /home/your_user/docker/elasticsearch).

Inside the elasticsearch folder, create the folders config, esdata, and plugins (see the sketch below). Inside the config folder, create the elasticsearch.yml file with the content listed under "Referenced files" at the end of this gist. If you have any plugin (for example, repository-s3), copy the plugin's folder into the elasticsearch/plugins folder. Ex.: elasticsearch/plugins/repository-s3
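
If you went with the suggested location, a quick way to create that folder layout is:

$ mkdir -p ~/docker/elasticsearch/{config,esdata,plugins}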

Now you need to set the right permissions on those folders and change their ownership. The Elasticsearch user and group ID inside the container is 1000, so go back to the elasticsearch folder level (cd ..) and type:

$ sudo chown -R 1000:1000 esdata config plugins

Create the docker-compose.yml file with the content listed at the end of this gist, then run Elasticsearch with:

$ docker-compose up -d

You can use docker-compose logs to see what's going on with the container.
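
For example, to follow the Elasticsearch container's output live (the service name is the one defined in docker-compose.yml):

$ docker-compose logs -f elasticsearch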

If everything went right, you will be able to run:

$ curl -XGET localhost:9200

And see something like:

{
  "name" : "BtSe67_",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "aCTyezR7TXqkqLn77qZEKA",
  "version" : {
    "number" : "5.4.3",
    "build_hash" : "eed30a8",
    "build_date" : "2017-06-22T00:34:03.743Z",
    "build_snapshot" : false,
    "lucene_version" : "6.5.1"
  },
  "tagline" : "You Know, for Search"
}

Logstash & Kibana

With Elasticsearch working properly, create a kibana folder in your docker folder (Ex.: /home/your_user/docker/kibana). Inside the kibana folder, create the folders config, logs/nginx, and logs/rails, as sketched below.
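
Assuming the suggested location, one way to create those folders:

$ mkdir -p ~/docker/kibana/{config,logs/nginx,logs/rails}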

I wasn't sure what the right permissions for the logs folder and its subfolders should be, so I just set 777:

$ sudo chmod -R 777 logs

Inside the config folder, create the logstash.conf file with the content listed at the end of this gist.

Back in the kibana folder, create the docker-compose.yml with the content listed at the end of this gist.

Run Kibana with:

$ docker-compose up -d kibana

Paste the log files (they must have the .log extension) into the respective folders (logs/nginx, logs/rails) and run:

$ docker-compose up logstash

You will be able to see Logstash parsing those files and sending them to Elasticsearch.
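
If you don't have real logs at hand, here are two hypothetical lines shaped to match the grok patterns in logstash.conf (the first belongs in a .log file under logs/rails, the second under logs/nginx):

I, [2017-06-22T12:34:56.789012 #1234] INFO -- : [f47ac10b-58cc-4372-a567-0e02b2c3d479] Completed 200 OK in 52ms
192.168.0.10 - - [22/Jun/2017:12:34:56 +0000] "GET /index.html HTTP/1.1" 200 1024 "-" "curl/7.47.0"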

Now, access http://localhost:5601 and you will get a page asking you to configure an index pattern. Fill the "Index name or pattern" field with logstash-nginx-*, select @timestamp in the "Time-field name" dropdown, and click Create. Click the + button to add a new index pattern, enter logstash-rails-*, select occurrency_time, and click Create. Click Discover and you will see the indexed content of each index (rails and nginx).

Extra Info

Logstash parses and sends new logs as soon as it detects new files.

If you want to check the indexes created by Logstash, you can run:

printf "Listing elasticsearch indexes\n"
curl -XGET 0:9200/_cat/indices?v
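
With the logstash.conf above, index names follow the logstash-<type>-<date> pattern, so the output should contain rows along these lines (names, uuids and sizes will vary):

health status index                          pri rep docs.count ...
yellow open   logstash-rails-logs-2017.06.22   5   1          1 ...
yellow open   logstash-nginx-logs-2017.06.22   5   1          1 ...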

To delete all indexes created by Logstash, run:

curl -XDELETE 0:9200/logstash-*
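
The wildcard above wipes every Logstash index at once; to delete just one day's index, target it by name instead (hypothetical index name following the pattern above):

curl -XDELETE 0:9200/logstash-nginx-logs-2017.06.22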

Referenced files

elasticsearch/config/elasticsearch.yml:

network.host: 0.0.0.0
http.cors.allow-origin: "*"
http.cors.enabled: true
script.inline: true

elasticsearch/docker-compose.yml:

elasticsearch:
  image: elasticsearch:5.4.3
  container_name: 'elasticsearch'
  restart: always
  mem_limit: 750m
  ports:
    - 9200:9200
    - 9300:9300
  environment:
    - "ES_JAVA_OPTS=-Xms750m -Xmx750m"
    - transport.host=0.0.0.0
  volumes:
    - "./config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml"
    - "./esdata:/usr/share/elasticsearch/data"
    - "./plugins:/usr/share/elasticsearch/plugins"

kibana/docker-compose.yml:

logstash:
  image: logstash:5.4.3
  container_name: 'logstash'
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  external_links:
    - elasticsearch:elasticsearch
  volumes:
    - ./config/logstash.conf:/etc/logstash/conf.d/logstash.conf
    - ./logs:/tmp/logs

kibana:
  image: kibana:5.4.3
  container_name: 'kibana'
  ports:
    - '5601:5601'
  external_links:
    - elasticsearch:elasticsearch

kibana/config/logstash.conf:

input {
  file {
    path => "/tmp/logs/rails/*.log"
    start_position => "beginning"
    type => "rails-logs"
  }
  file {
    path => "/tmp/logs/nginx/*.log"
    start_position => "beginning"
    type => "nginx-logs"
  }
}

filter {
  if [type] == "rails-logs" {
    grok {
      match => { "message" => "[DFEWI], \[%{TIMESTAMP_ISO8601:occurrency_time} #%{POSINT:pid}\] %{LOGLEVEL:loglevel} -- : \[%{UUID:uuid}\] %{GREEDYDATA:message}" }
      overwrite => [ "message" ]
      remove_field => ["uuid"]
    }
  }
  if [type] == "nginx-logs" {
    grok {
      match => { "message" => "%{IPORHOST:clientip} - - \[%{HTTPDATE:timestamp}\] \"%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:/\"(?:%{URI:referrer}|-)\"|%{QS:referrer}) %{QS:agent}" }
    }
    date {
      match => [ "timestamp", "dd/MMM/YYYY:H:m:s Z" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-%{[type]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}