Gist by andrewkroh — Last active January 17, 2023
DRAFT: Routing Filebeat data to a Fleet integration data stream

This is an unofficial tutorial that may be useful to users who are in the process of migrating to Elastic Agent and Fleet. It explains how to route Filebeat data into a data stream managed by a Fleet integration package.

Install the Fleet integration

Installing a Fleet integration sets up all of its data streams and dashboards. There are two ways to install one. In these examples we install the Hashicorp Vault 1.3.1 integration.

Use Kibana (easiest)

Navigate to the Fleet integration that you want to use, click on the settings tab, then click install.


Use Fleet API (advanced)

See the OpenAPI definition for the Fleet API in the elastic/kibana repo.

curl -X 'POST' \
  'http://localhost:5601/api/fleet/epm/packages/hashicorp_vault/1.3.1' \
  --user '<username>:<password>' \
  -H 'accept: application/json' \
  -H 'kbn-xsrf: true' \
  -H 'Content-Type: application/json' \
  -d '{
  "force": true,
  "ignore_constraints": true
}'

Configure Filebeat

When using Elastic Agent, Fleet manages the input configuration for you. In this scenario you are taking responsibility for configuring the input properly and for ensuring that the data from Filebeat is in the format expected by the package's ingest pipelines.


Filebeat needs permission to write to the Fleet-managed data streams. Create a role and assign it to your Filebeat users. The privileges shown below are an example; adjust the index names to match the data streams you write to.

POST _security/role/filebeat_to_fleet_ingest
{
  "cluster": ["monitor"],
  "indices": [
    {
      "names": ["logs-hashicorp_vault.audit-default"],
      "privileges": ["auto_configure", "create_doc"]
    }
  ]
}


Set up an input as you would normally with standalone Filebeat, but add some additional fields to the data. When @metadata.raw_index is set, Filebeat's Elasticsearch output writes the event to the specified data stream (which is managed by the Fleet integration).

# filebeat.yml
filebeat.inputs:
- type: filestream
  id: hashicorp_vault-audit
  paths:
    - /var/log/vault/audit*.json
  # Add fields required to route data to the Fleet data stream.
  fields_under_root: true
  fields:
    data_stream:
      dataset: hashicorp_vault.audit
      type: logs
      namespace: default
  processors:
    - add_fields:
        target: '@metadata'
        fields:
          raw_index: logs-hashicorp_vault.audit-default

# If you are only sending to Fleet data streams then disable Filebeat's
# default template and ILM setup.
setup.ilm.enabled: false
setup.template.enabled: false
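The raw_index value follows Fleet's data stream naming scheme, `<type>-<dataset>-<namespace>`, and must agree with the data_stream fields set on the events. A minimal sketch of that relationship (the helper name is hypothetical, for illustration only):

```python
def fleet_data_stream(ds_type: str, dataset: str, namespace: str) -> str:
    """Build a Fleet data stream name from its three parts (illustrative helper)."""
    return f"{ds_type}-{dataset}-{namespace}"

# Must match the data_stream fields set in filebeat.yml above.
print(fleet_data_stream("logs", "hashicorp_vault.audit", "default"))
# logs-hashicorp_vault.audit-default
```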

NOTE: The add_fields processor requires Filebeat 8.0 in order to write to @metadata (see elastic/beats#30092). An alternative implementation that works in 7.x is to use the script processor.

    - script:
        lang: javascript
        source: |
          function process(event) {
            event.Put('@metadata._raw_index', 'logs-hashicorp_vault.audit-default');
          }


Check that the data stream contains data. In the Kibana Dev Tools console run this command, and look for the data stream's backing index with a non-zero document count.

GET _cat/indices/*hashicorp_vault*?v
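If you want to check the counts programmatically, here is a small sketch that parses verbose (`?v`) cat output and flags empty indices. The sample response is illustrative, not captured from a real cluster, and real output has additional columns:

```python
def empty_indices(cat_output: str) -> list[str]:
    """Return names of indices whose docs.count column is zero."""
    header, *rows = [line.split() for line in cat_output.splitlines() if line.strip()]
    idx, cnt = header.index("index"), header.index("docs.count")
    return [r[idx] for r in rows if int(r[cnt]) == 0]

# Illustrative sample of `GET _cat/indices/*hashicorp_vault*?v` output.
sample = """\
health status index                                                    docs.count
yellow open   .ds-logs-hashicorp_vault.audit-default-2023.01.17-000001 42
"""
print(empty_indices(sample))  # []
```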

Lokey92 commented May 25, 2022

Excellent, thanks for the proof of concept! Do you happen to know if there's plans to align Beats to the same index structure Agent integrations have in the future?


There are no plans to change Filebeat from using a single data stream. You could modify the provided Filebeat index template to include a wildcard pattern so that separate data streams can be created.

{
  "index_templates" : [
    {
      "name" : "filebeat-8.2.0",
      "index_template" : {
        "index_patterns" : [
          "filebeat-8.2.0-*.*"  # Added to allow filebeat-8.2.0-{module}.{dataset}.
        ],
        "template" : {
          "settings" : {
            ...

Then use the same method described above to set raw_index on data that you want routed to separate data streams.
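Elasticsearch index patterns use simple wildcards, so shell-style globbing approximates the matching. A quick sketch of why the added pattern captures per-dataset names while leaving the plain data stream to the stock pattern:

```python
import fnmatch

# The added pattern matches per-dataset names like filebeat-8.2.0-{module}.{dataset}.
pattern = "filebeat-8.2.0-*.*"
print(fnmatch.fnmatch("filebeat-8.2.0-hashicorp_vault.audit", pattern))  # True
print(fnmatch.fnmatch("filebeat-8.2.0", pattern))  # False
```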
