oc get pods
NAME                               READY     STATUS    RESTARTS   AGE
docker-registry-2-5y1lu            1/1       Running   0          1h
elasticsearch-1-037ve              2/2       Running   0          47m
exposecontroller-1-71me3           1/1       Running   0          47m
fabric8-msxjk                      1/1       Running   0          47m
fluentd-l0s0d                      1/1       Running   0          2m
grafana-1-x5jhq                    1/1       Running   0          47m
kibana-1-horen                     2/2       Running   0          47m
prometheus-1-w4adn                 2/2       Running   0          47m
prometheus-blackbox-expo-1-oo6wq   1/1       Running   0          47m
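
Fluentd logs

The output below was presumably captured from the fluentd pod listed above, e.g.:

oc logs fluentd-l0s0d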
2016-10-07 11:54:25 +0000 [info]: reading config file path="/etc/fluent/fluent.conf"
2016-10-07 11:54:25 +0000 [info]: starting fluentd-0.14.1
2016-10-07 11:54:25 +0000 [info]: spawn command to main: /opt/rh/rh-ruby23/root/usr/bin/ruby -Eascii-8bit:ascii-8bit /usr/bin/fluentd --under-supervisor
2016-10-07 11:54:28 +0000 [info]: reading config file path="/etc/fluent/fluent.conf"
2016-10-07 11:54:29 +0000 [info]: starting fluentd-0.14.1 without supervision
2016-10-07 11:54:29 +0000 [info]: gem 'fluent-plugin-docker_metadata_filter' version '0.1.3'
2016-10-07 11:54:29 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.5.0'
2016-10-07 11:54:29 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '0.24.0'
2016-10-07 11:54:29 +0000 [info]: gem 'fluent-plugin-prometheus' version '0.1.3'
2016-10-07 11:54:29 +0000 [info]: gem 'fluentd' version '0.14.1'
2016-10-07 11:54:29 +0000 [info]: adding filter pattern="kubernetes.**" type="kubernetes_metadata"
2016-10-07 11:54:33 +0000 [info]: adding filter pattern="**" type="prometheus"
2016-10-07 11:54:33 +0000 [info]: adding match pattern="**" type="elasticsearch"
2016-10-07 11:54:34 +0000 [info]: adding source type="prometheus"
2016-10-07 11:54:35 +0000 [info]: adding source type="prometheus_monitor"
2016-10-07 11:54:35 +0000 [info]: adding source type="tail"
2016-10-07 11:54:35 +0000 [info]: using configuration file: <ROOT>
  <source>
    @type prometheus
  </source>
  <source>
    @type prometheus_monitor
  </source>
  <source>
    @type tail
    path "/var/log/containers/*.log"
    pos_file "/var/log/es-containers.log.pos"
    time_format "%Y-%m-%dT%H:%M:%S.%N"
    tag "kubernetes.*"
    format json
    read_from_head true
    keep_time_key true
  </source>
  <filter kubernetes.**>
    @type kubernetes_metadata
    kubernetes_url "https://kubernetes.default.svc"
    verify_ssl true
    preserve_json_log true
  </filter>
  <filter **>
    @type prometheus
    <metric>
      name fluentd_records_total
      type counter
      desc The total number of records read by fluentd.
    </metric>
  </filter>
  <match **>
    @type elasticsearch
    @log_level "info"
    include_tag_key true
    time_key "time"
    host "elasticsearch"
    port 9200
    scheme "http"
    buffer_type "memory"
    buffer_chunk_limit 8m
    buffer_queue_limit 8192
    flush_interval 10s
    retry_limit 10
    disable_retry_limit 
    retry_wait 1s
    max_retry_wait 60s
    num_threads 4
    logstash_format true
    reload_connections false
    <buffer>
      flush_mode interval
      retry_type exponential_backoff
      @type memory
      flush_thread_count 4
      flush_interval 10s
      retry_forever 
      retry_max_times 10
      retry_max_interval 60s
      chunk_limit_size 8m
      queue_length_limit 8192
    </buffer>
  </match>
</ROOT>
2016-10-07 11:54:35 +0000 [info]: following tail of /var/log/containers/fluentd-l0s0d_default_fluentd-5f55dbcb5440be2d2498425f98b83953278d778bb52011c94c5b4d031f2338f1.log
2016-10-07 11:54:35 +0000 [info]: following tail of /var/log/containers/fluentd-l0s0d_default_POD-e555c60e821d8f5a693e88de7fa24d4b839407cffbc49c6216b63d0f49319918.log
2016-10-07 11:54:35 +0000 [info]: following tail of /var/log/containers/elasticsearch-1-037ve_default_elasticsearch-6feb27d6e7b6aa1d5d95662468a3d94492d281d44f87270bc737d417ba847b33.log
2016-10-07 11:54:35 +0000 [info]: following tail of /var/log/containers/kibana-1-horen_default_logstash-template-729de3f3978d2d5b9b177ca75963f601cbfdbb43ea5eb53d6200110c0505fc71.log
2016-10-07 11:54:37 +0000 [info]: following tail of /var/log/containers/elasticsearch-1-037ve_default_logstash-template-b18ee55fdbd32b1b64acb794c242e268b77e8f4859f2275b88f0739ae83bea61.log
2016-10-07 11:54:39 +0000 [info]: following tail of /var/log/containers/prometheus-1-w4adn_default_prometheus-bdb0bd9086ed431e03880be35ed9d71331de6ceac966221d17ae7e6eb4aa524c.log
2016-10-07 11:54:39 +0000 [info]: following tail of /var/log/containers/kibana-1-horen_default_kibana-5d6bb7dbaa631a326e40200d01d97d4d31541c10b5cec439bf8bdd214feb3235.log
2016-10-07 11:54:39 +0000 [info]: following tail of /var/log/containers/prometheus-blackbox-expo-1-oo6wq_default_blackbox-exporter-8c80807c4fd70c4b1cbb3001ddb7921a97f7ba377987a4464677ab36d6f17612.log
2016-10-07 11:54:39 +0000 [info]: following tail of /var/log/containers/prometheus-1-w4adn_default_configmap-reload-ebeaf7615aa94c52974dd622e53bd26e7d4c02ff1a389651e4236cf0e4a2c334.log
2016-10-07 11:54:39 +0000 [info]: following tail of /var/log/containers/grafana-1-x5jhq_default_grafana-23166ac5bd536516d2b6dc6dcb903357691c19a2b404897f7447b4fd671ce54e.log
2016-10-07 11:54:40 +0000 [info]: following tail of /var/log/containers/exposecontroller-1-71me3_default_exposecontroller-40f9b84c25d945793114eee0abe91c603e497a2acf66c6118c36a1725e39b921.log
2016-10-07 11:54:40 +0000 [info]: following tail of /var/log/containers/prometheus-blackbox-expo-1-oo6wq_default_POD-fdb0be1159419939f6c0fa62e58a290ebb48324bb9c8ee76c29386bc2a92e861.log
2016-10-07 11:54:40 +0000 [info]: following tail of /var/log/containers/prometheus-1-w4adn_default_POD-16305ac50d58772734d472abdb59d06a3187e773c66e6c33452bea68dace302a.log
2016-10-07 11:54:40 +0000 [info]: following tail of /var/log/containers/kibana-1-horen_default_POD-774f4cfa701d0b64a74bf77b72eeaa60b35367015068d007b1de8ecbbfb6e622.log
2016-10-07 11:54:40 +0000 [info]: following tail of /var/log/containers/elasticsearch-1-037ve_default_POD-49895cc8f9fbbd3aa85c1af43771b1740c090d1dbd82c5141e16eca99beb2814.log
2016-10-07 11:54:40 +0000 [info]: following tail of /var/log/containers/grafana-1-x5jhq_default_POD-54e7c834e9501b6a25b770bfa781d4158168b11a0f6f7722df476bf8dd30c06d.log
2016-10-07 11:54:40 +0000 [info]: following tail of /var/log/containers/fabric8-msxjk_default_fabric8-container-2c2755491de35c62e25c8784a810804e74d0e689d94e9026e71488152d5f8c51.log
2016-10-07 11:54:40 +0000 [info]: following tail of /var/log/containers/exposecontroller-1-71me3_default_POD-03932b25f966cd5c4f79e387483882280a0b563b47950318cfff15ccd9d46124.log
2016-10-07 11:54:40 +0000 [info]: following tail of /var/log/containers/fabric8-msxjk_default_POD-6a289ebea98c8e54a5daf0b26c63bb73f61c2481da1628225b83a4652fb7462d.log
2016-10-07 11:54:40 +0000 [info]: following tail of /var/log/containers/docker-registry-2-5y1lu_default_registry-1e2d3e045236a336f45047f33f7e4d22eefa9352c5f4ab387de6a92c89574f26.log
2016-10-07 11:54:42 +0000 [info]: following tail of /var/log/containers/docker-registry-2-5y1lu_default_POD-2258c01fe52e68395c137d4b9d955c161c50d7ee643c136dc4761267348f0b0e.log
2016-10-07 11:54:42 +0000 [info]: following tail of /var/log/containers/router-1-x97oz_default_router-6532b60ed56bb47d371daf1994c9394fc9aa8aa982b15179ccd1ed7985f1c116.log
2016-10-07 11:54:42 +0000 [info]: following tail of /var/log/containers/router-1-x97oz_default_POD-9b74cb1e5343234aea8e2e55b9af4039cdaa1123ad91bf0d8fb905f4ff6e7e28.log
2016-10-07 11:54:42 +0000 [warn]: super was not called in #start: called it forcedly plugin=Fluent::PrometheusMonitorInput
2016-10-07 11:54:42 +0000 [warn]: super was not called in #start: called it forcedly plugin=Fluent::PrometheusInput
2016-10-07 11:54:48 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
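
The configuration above registers a prometheus source and a fluentd_records_total counter incremented for every record. As a quick sanity check, the counter should be scrapeable from the fluentd pod; a minimal sketch, assuming fluent-plugin-prometheus listens on its default port 24231 and path /metrics (the pod IP 172.17.0.10 is taken from the oc get -o wide pods output further down):

curl http://172.17.0.10:24231/metrics | grep fluentd_records_total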

Elasticsearch logs

[2016-10-07 11:12:32,862][INFO ][node                     ] [The Wink] started
[2016-10-07 11:12:32,913][INFO ][gateway                  ] [The Wink] recovered [0] indices into cluster_state
[2016-10-07 11:12:40,664][INFO ][cluster.metadata         ] [The Wink] [.kibana] creating index, cause [auto(index api)], templates [kibana], shards [5]/[1], mappings [index-pattern, config]
[2016-10-07 11:12:41,329][INFO ][cluster.routing.allocation] [The Wink] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][4]] ...]).
[2016-10-07 11:12:41,418][INFO ][cluster.metadata         ] [The Wink] [.kibana] update_mapping [index-pattern]
[2016-10-07 11:12:42,223][INFO ][cluster.metadata         ] [The Wink] [.kibana] create_mapping [search]
[2016-10-07 11:12:42,566][INFO ][cluster.metadata         ] [The Wink] [.kibana] create_mapping [visualization]
[2016-10-07 11:12:42,952][INFO ][cluster.metadata         ] [The Wink] [.kibana] create_mapping [dashboard]
[2016-10-07 11:12:43,030][INFO ][cluster.metadata         ] [The Wink] [.kibana] update_mapping [config]
[2016-10-07 11:54:50,221][INFO ][cluster.metadata         ] [The Wink] [logstash-2016.10.07] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [_default_, fluentd]
[2016-10-07 11:54:50,678][INFO ][cluster.routing.allocation] [The Wink] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-2016.10.07][4]] ...]).
[2016-10-07 11:54:50,715][INFO ][cluster.metadata         ] [The Wink] [logstash-2016.10.07] update_mapping [fluentd]
[2016-10-07 11:54:50,801][INFO ][cluster.metadata         ] [The Wink] [logstash-2016.10.07] update_mapping [fluentd]
[2016-10-07 11:54:51,854][INFO ][cluster.metadata         ] [The Wink] [logstash-2016.10.07] update_mapping [fluentd]
[2016-10-07 11:54:52,010][INFO ][cluster.metadata         ] [The Wink] [logstash-2016.10.07] update_mapping [fluentd]
[2016-10-07 11:54:52,099][INFO ][cluster.metadata         ] [The Wink] [logstash-2016.10.07] update_mapping [fluentd]
[2016-10-07 11:54:52,140][INFO ][cluster.metadata         ] [The Wink] [logstash-2016.10.07] update_mapping [fluentd]
[2016-10-07 11:54:52,217][INFO ][cluster.metadata         ] [The Wink] [logstash-2016.10.07] update_mapping [fluentd]
[2016-10-07 11:54:52,274][INFO ][cluster.metadata         ] [The Wink] [logstash-2016.10.07] update_mapping [fluentd]
[2016-10-07 11:54:52,369][INFO ][cluster.metadata         ] [The Wink] [logstash-2016.10.07] update_mapping [fluentd]
[2016-10-07 11:54:52,418][INFO ][cluster.metadata         ] [The Wink] [logstash-2016.10.07] update_mapping [fluentd]
[2016-10-07 11:54:52,581][INFO ][cluster.metadata         ] [The Wink] [logstash-2016.10.07] update_mapping [fluentd]
[2016-10-07 11:54:53,022][INFO ][cluster.metadata         ] [The Wink] [logstash-2016.10.07] update_mapping [fluentd]
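
The update_mapping entries above show fluentd records arriving in the daily logstash index. A minimal verification sketch, assuming the elasticsearch service (host and port 9200, as in the fluentd match section) is reachable from inside the cluster:

curl http://elasticsearch:9200/_cat/indices?v
curl http://elasticsearch:9200/logstash-2016.10.07/_count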

oc get -o wide pods
NAME                               READY     STATUS      RESTARTS   AGE       IP              NODE
docker-registry-2-5y1lu            1/1       Running     1          1h        172.17.0.4      minishift
elasticsearch-1-037ve              2/2       Running     2          1h        172.17.0.5      minishift
exposecontroller-1-71me3           1/1       Running     2          1h        172.17.0.6      minishift
fabric8-msxjk                      1/1       Running     2          1h        172.17.0.3      minishift
fluentd-l0s0d                      1/1       Running     2          33m       172.17.0.10     minishift
grafana-1-x5jhq                    1/1       Running     1          1h        172.17.0.2      minishift
kibana-1-horen                     2/2       Running     2          1h        172.17.0.9      minishift
prometheus-1-w4adn                 2/2       Running     4          1h        172.17.0.8      minishift
prometheus-blackbox-expo-1-oo6wq   1/1       Running     2          1h        172.17.0.7      minishift
router-1-x97oz                     1/1       Running     5          1d        192.168.64.38   minishift
swarm-client-s2i-1-build           0/1       Completed   0          4h        172.17.0.3      minishift
swarm-client-s2i-2-build           0/1       Completed   0          4h        172.17.0.3      minishift
swarm-client-s2i-3-build           0/1       Completed   0          4h        172.17.0.3      minishift
swarm-client-s2i-4-build           0/1       Completed   0          4h        172.17.0.3      minishift
swarm-client-s2i-5-build           0/1       Error       0          4h        <none>          minishift
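
Note that swarm-client-s2i-5-build ended in Error; the failed build can presumably be inspected with, e.g.:

oc logs build/swarm-client-s2i-5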
oc get -o wide services
NAME                    CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE       SELECTOR
blackbox-exporter       172.30.30.235    <nodes>       80/TCP                    1h        group=io.fabric8.devops.apps,project=prometheus-blackbox-exporter,provider=fabric8
docker-registry         172.30.126.105   <nodes>       5000/TCP                  1d        docker-registry=default
elasticsearch           172.30.230.114   <nodes>       9200/TCP                  1h        group=io.fabric8.devops.apps,project=elasticsearch,provider=fabric8
elasticsearch-masters   None             <none>        9300/TCP                  1h        group=io.fabric8.devops.apps,project=elasticsearch,provider=fabric8
fabric8                 172.30.138.129   <nodes>       80/TCP                    1h        expose=true,group=io.fabric8.apps,project=console,provider=fabric8
grafana                 172.30.174.67    <nodes>       80/TCP                    1h        group=io.fabric8.devops.apps,project=grafana,provider=fabric8
hawkular-apm-server     172.30.208.32    <none>        8080/TCP                  2h        app=hawkular-apm-server,deploymentconfig=hawkular-apm-server
keycloak                172.30.2.109     <nodes>       80/TCP                    2h        group=io.fabric8.devops.apps,project=keycloak,provider=fabric8
kibana                  172.30.176.246   <nodes>       80/TCP                    1h        group=io.fabric8.devops.apps,project=kibana,provider=fabric8
kubernetes              172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     1d        <none>
prometheus              172.30.92.102    <nodes>       80/TCP                    1h        group=io.fabric8.devops.apps,project=prometheus,provider=fabric8
router                  172.30.150.93    <none>        80/TCP,443/TCP,1936/TCP   1d        router=router
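
Since exposecontroller is running, the exposed services (e.g. fabric8, kibana, grafana) should have matching OpenShift routes, which can be listed with:

oc get routes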
