@driprado
Created July 1, 2023 11:45
FluentD Config Digest


https://docs.fluentd.org/configuration/config-file

Config File Location per Installation method

rpm, deb, dmg

sudo vi /etc/td-agent/td-agent.conf

gem

sudo vi /etc/fluent/fluent.conf

docker

docker run -ti --rm -v /path/to/dir:/fluentd/etc fluentd -c /fluentd/etc/<conf-file>

List of Directives

  1. source ⇒ determines the input sources
  2. match ⇒ determines the output destinations
  3. filter ⇒ determines event processing
  4. system ⇒ sets system-wide configuration
  5. label ⇒ groups output and filter directives for internal routing
  6. @include ⇒ includes other files
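The @include directive accepts absolute paths, relative paths, and glob patterns. A minimal sketch, assuming a hypothetical conf.d drop-in directory:

# td-agent.conf
# Pull in every *.conf file from a drop-in directory (path is illustrative)
@include /etc/td-agent/conf.d/*.conf

<match **>
  @type stdout
</match>

Included files are merged in place, so splitting sources, filters, and outputs into separate files keeps a large configuration manageable.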

1 source: where data comes from

  • input sources are enabled by selecting and configuring the desired input plugins using source directives.
  • you may add multiple source configurations as required.
  • each source directive must include a @type parameter to specify the input plugin to use.
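For instance, a source directive enabling the built-in http input plugin (the port and bind values below are just an illustration):

<source>
  @type http
  port 9880
  bind 0.0.0.0
</source>

With this in place, events can be submitted by POSTing JSON to http://<ip>:9880/<tag>.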

Interlude: Routing

  • The source submits events to the Fluentd routing engine.
  • An event consists of three entities: tag, time and record
  • tag ⇒ a string separated by dots, e.g. myapp.access
  • time ⇒ specified by the input plugin; must be in Unix time format
  • record ⇒ a JSON object

Ex:

# http://<ip>:9880/myapp.access?json={"event":"data"}
tag: myapp.access
time: (current time)
record: {"event":"data"}

2 match: tell Fluentd what to do with events from sources

  • looks for events with matching tags and processes them.
  • The most common use of the match directive is to output events to other systems.
  • plugins that correspond to the match directive are called output plugins.
  • each match directive must include a match pattern and a @type parameter.
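A minimal sketch of a match directive: the pattern selects events by tag (* matches a single tag part), and @type names the output plugin:

# Print every event whose tag is myapp.<something> to stdout
<match myapp.*>
  @type stdout
</match>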

3 filter: event processing pipeline

  • filter directive has the same syntax as match
  • filter could be chained for processing pipeline.

Input -> filter 1 -> ... -> filter N -> Output

Ex:

# http://this.host:9880/myapp.access?json={"event":"data"}
<source>
  @type http
  port 9880
</source>

<filter myapp.access>
  @type record_transformer
  <record>
    host_param "#{Socket.gethostname}"
  </record>
</filter>

<match myapp.access>
  @type file
  path /var/log/fluent/access
</match>

  1. {"event":"data"} goes to the record_transformer filter
  2. the record_transformer filter adds a host_param field to the event
  3. {"event":"data","host_param":"webserver1"} goes to the file output plugin

4 system: set system-wide configuration

Most system-wide configuration options are also available as CLI options:

  • log_level
  • suppress_repeated_stacktrace
  • emit_error_log_interval
  • suppress_config_dump
  • without_source
  • process_name

e.g.:

<system>  
  log_level error # equal to -qq cli deployment option
  without_source  # equal to --without-source cli deployment option
  ...
</system>

5 label: filter and output grouping

  • label groups filter and output for internal routing.
  • label reduces the complexity of tag handling.

e.g.:

<source>
  @type forward
</source>


<source>
  @type tail
  @label @SYSTEM
</source>


<filter access.**>
  @type record_transformer
  <record>
    ...
  </record>
</filter>
<match **>
  @type elasticsearch
  # ...
</match>


<label @SYSTEM>
  <filter var.log.middleware.**>
    @type grep
    # ...
  </filter>
  <match **>
    @type s3
    # ...
  </match>
</label>
  • tail events carry the @SYSTEM label, so they are routed through the grep filter and matched to the s3 output inside the @SYSTEM label.
  • forward events, which are not labelled, are routed through the record_transformer filter and matched to the elasticsearch output.