# Centralised logging for Docker containers running Spring Boot applications

*(Diagram: log flow through the system)*

## About

We have a bunch of Spring Boot apps running in various Docker containers and wanted a centralised place to view the logs.

## How

Using the syslog driver in Docker 1.6, we were able to write each container's logs to the host and forward them on to our ELK server, where Logstash picks them up and processes them.
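In short, the flow looks like this:

```
Spring Boot app (stdout/stderr)
  └─> Docker syslog log driver (udp://127.0.0.1:514, facility local0)
        └─> rsyslog on the Docker host
              ├─> /var/log/containers                  (local backup)
              └─> forwarded to rsyslog on the ELK server (UDP 514)
                    └─> /var/log/docker/<hostname>.log
                          └─> Logstash (file input → multiline → grok)
                                └─> Elasticsearch ←─ Kibana
```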

## Component Config

### Spring Boot application

Turn off colour output in your app to make the log parsing easier:

spring.output.ansi.enabled=NEVER
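For reference, a default Spring Boot log line (with colour off) looks something like this. The application and message here are made up, but the layout — in particular the fixed-width logger column — is what the grok patterns further down rely on:

```
2015-06-15 10:15:30.123  INFO 1 --- [           main] com.example.demo.DemoApplication         : Started DemoApplication in 3.2 seconds
```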

### Docker Container

Run with the correct logging parameters:

sudo docker run -dti --log-driver=syslog --log-opt syslog-address=udp://127.0.0.1:514 --log-opt syslog-facility=local0 --log-opt syslog-tag=<name> <image>

Note that we are sending the logs to the facility local0.
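As a quick sanity check that messages are reaching the host, something like the following should work (a sketch: any image with a shell will do, and it assumes the rsyslog rule described below is already in place):

```
sudo docker run --rm --log-driver=syslog \
  --log-opt syslog-address=udp://127.0.0.1:514 \
  --log-opt syslog-facility=local0 \
  --log-opt syslog-tag=logtest \
  ubuntu echo "hello from a container"
# the line should then appear in /var/log/containers on the host
```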

### Docker Host

Configure rsyslog: create a file called /etc/rsyslog.d/01-docker.conf (assuming your /etc/rsyslog.conf has the line $IncludeConfig /etc/rsyslog.d/*.conf):

template(name="LongTagForwardFormat" type="string" string="<%PRI%>%TIMESTAMP:::date-rfc3339% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg%")

if ($syslogfacility-text == 'local0') then {
	action(type="omfile" file="/var/log/containers")
	action(type="omfwd" Target="stgpdlog01.slu.skycdc.com" Port="514" Protocol="udp" Template="LongTagForwardFormat")
	stop
}

Here we match messages on the local0 facility, write everything to /var/log/containers on the host (as a backup), and also forward them on to the ELK server (the template removes the 32-character restriction on tag length).
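One assumption worth making explicit: the Docker daemon sends to udp://127.0.0.1:514, so rsyslog must actually be listening on UDP. On distros where it isn't by default, enable the imudp input (for example at the top of the same file), then restart rsyslog (`sudo service rsyslog restart`):

```
module(load="imudp")
input(type="imudp" port="514")
```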

### ELK Server

Configure rsyslog: create a (receiving) file called /etc/rsyslog.d/01-docker.conf:

template(name="hostname_file" type="string" string="/var/log/docker/%HOSTNAME%.log")
if ($programname == 'docker') then {
  action(type="omfile" dynaFile="hostname_file" FileCreateMode="0644" fileOwner="logstash")
  stop
}
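The imudp input from the Docker Host section is needed on this box too, since it receives the forwarded messages on UDP 514. Once everything is restarted, you should see one file per Docker host appearing (the hostname here is illustrative):

```
ls -l /var/log/docker/
tail -f /var/log/docker/dockerhost01.log
```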

### Logstash

This is where we have to convert the syslog output into something useful.

This is our /etc/logstash/conf.d/indexer.conf:

input {
    file {
        type => "dockerlogs"
        path => [ "/var/log/docker/*.log" ]
        start_position => "end"
    }
}

filter {
    if [type] == "dockerlogs" {
        multiline {
            patterns_dir => "/etc/logstash/conf.d/patterns"
            pattern => "((%{SYSLOGANDLOG4JLOG})|(%{SYSLOGANDOTHERLOG})|(%{SYSLOGANDSPRINGBOOTLOG})|(%{SYSLOGANDAPPDYNAMICCLIENTFAIL}))"
            negate => true
            what => "previous"
        }
        grok {
            patterns_dir => "/etc/logstash/conf.d/patterns"
            match => [ "message", "%{SYSLOGANDAPPDYNAMICCLIENTFAIL}"]
            add_tag => "app_dynamics_failure"
            add_field => [ "received_at", "%{@timestamp}", "loglevel", "ERROR" ]
        }
        grok {
            patterns_dir => "/etc/logstash/conf.d/patterns"
            match => [ "message", "%{SYSLOGANDLOG4JLOG}", "message", "%{SYSLOGANDOTHERLOG}", "message", "%{SYSLOGANDSPRINGBOOTLOG}", "message", "%{SYSLOGVANILLA}" ]
            add_field => [ "received_at", "%{@timestamp}" ]
        }
        syslog_pri { }
        date {
            match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601" ]
            timezone => "Europe/London"
        }
        mutate {
            replace => [ "host", "%{hostname}" ]
        }
    }
}

output {
    stdout {
        codec => rubydebug
    }
    elasticsearch {
        host => 'localhost'
        cluster => 'elasticsearch_cluster'
    }
}
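The multiline filter and the host/cluster options on the elasticsearch output are Logstash 1.x-era syntax. Assuming a package install of that vintage (paths may differ), the config can be sanity-checked and applied with:

```
/opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/indexer.conf --configtest
sudo service logstash restart
```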

And the patterns used, /etc/logstash/conf.d/patterns/logstash.grok:

SYSLOG_PREFIX (%{SYSLOGTIMESTAMP:syslog_timestamp}|%{TIMESTAMP_ISO8601:syslog_timestamp}) %{HOSTNAME:hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])[:]?
SYSLOGVANILLA %{SYSLOG_PREFIX}%{GREEDYDATA:logmessage}
SYSLOGANDSPRINGBOOTLOG %{SYSLOG_PREFIX} %{SPRINGBOOTLOG:java_log}
SPRINGBOOT_PREFIX %{TIMESTAMP_ISO8601:java_timestamp}\s*%{WORD:loglevel} %{POSINT:process_id} --- \[\s*%{GREEDYDATA:java_thread_name}\] (?<javaclass>.{41})
SPRINGBOOTLOG %{SPRINGBOOT_PREFIX}: %{GREEDYDATA:logmessage}
SYSLOGANDLOG4JLOG %{SYSLOG_PREFIX} log4j:%{WORD:loglevel} %{GREEDYDATA:logmessage}
SYSLOGANDOTHERLOG %{SYSLOG_PREFIX} (?:\[%{WORD:loglevel}\]:) %{GREEDYDATA:logmessage}
SYSLOGANDAPPDYNAMICCLIENTFAIL %{SYSLOG_PREFIX}.*%{APPDYNAMICSCLIENTFAIL:logmessage}
APPDYNAMICSCLIENTFAIL Could not start Java Agent%{GREEDYDATA}
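For reference, a forwarded line that SYSLOGANDSPRINGBOOTLOG should match looks roughly like this — it's the sample application line from earlier with the syslog prefix in front. The hostname and tag are illustrative, and the logger column must be padded out to the 41 characters the javaclass capture expects:

```
2015-06-15T10:15:30+01:00 dockerhost01 myapp[1234]: 2015-06-15 10:15:30.123  INFO 1 --- [           main] com.example.demo.DemoApplication         : Started DemoApplication in 3.2 seconds
```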

I'm sure your googling skills will allow you to find out what all that means. The key point is that syslog adds a timestamp to each line, which makes it tricky to determine which lines belong to a multiline log entry (a stack trace, for example).

Watch out for the lack of timezones on the log entries (from both syslog and Spring Boot).
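One way to sidestep this — an option, not something this setup does — is to pin the application's JVM to a fixed zone so Spring Boot's timestamps are unambiguous (you would then adjust the timezone in the date filter above to match):

```
# hypothetical invocation: force the JVM default timezone to UTC
java -Duser.timezone=UTC -jar app.jar
```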

### Elasticsearch

Make sure your Elasticsearch config (elasticsearch.yml) has the line defining the matching cluster name:

cluster.name: elasticsearch_cluster
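You can confirm which cluster the node has actually joined via the cluster health API:

```
curl 'http://localhost:9200/_cluster/health?pretty'
# the response should report "cluster_name" : "elasticsearch_cluster"
```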

### Kibana

Make sure your Kibana instance is pointing at Elasticsearch (in config.js):

elasticsearch: "http://localhost:9200"