@neu5ron
Last active March 11, 2021 01:31
A quick hack to get the Zeek data into any Elastic setup

Based upon the simulation from: https://github.com/OTRF/detection-hackathon-apt29

Understanding of Zeek Fields

Use the following OSSEM branch.

Quick Setup of ELK

If you don't already have an Elasticsearch setup, you can use the steps below; otherwise, skip this section.

Download and run Elasticsearch:

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-linux-x86_64.tar.gz
tar -zxvf elasticsearch-7.6.2-linux-x86_64.tar.gz
cd elasticsearch-7.6.2
./bin/elasticsearch

Keep the above terminal running.

Open a new terminal.

Download and run Kibana:

wget https://artifacts.elastic.co/downloads/kibana/kibana-7.6.2-linux-x86_64.tar.gz
tar -zxvf kibana-7.6.2-linux-x86_64.tar.gz
cd kibana-7.6.2-linux-x86_64/
./bin/kibana

Browse to http://localhost:5601

Prep Elasticsearch

You only need to do this once. Skip this section and go to "Upload Zeek Data to ELK" if you have already done it, or if you are using your own ELK/Elastic setup.

Login/browse to your Kibana instance and go to Dev Tools (the wrench icon in the bottom left).

Copy and paste the following items and run them. To run each one, either press the play button near the top right of the input, or press Ctrl + Enter.

PUT /_template/temporary_hackathon
{
  "order": 11,
  "index_patterns": [ "indexme-zeek-hackathon" ],
  "version": 2020050201,
  "settings": {
    "index": {
      "mapping": {
        "ignore_malformed": true,
        "total_fields.limit": "5000",
        "coerce": true
      }
    },
    "refresh_interval": "5s",
    "number_of_replicas": 0,
    "number_of_shards": 1
  },
  "mappings": {
    "dynamic": "true",
    "dynamic_templates": [
      {
        "strings": {
          "match_mapping_type": "string",
          "mapping": {
            "ignore_above": 12048,
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword"
              }
            }
          }
        }
      }
    ],
    "properties": {
      "@timestamp": {
        "type": "date"
      },
      "id.orig_h": {
        "type": "ip"
      },
      "id.resp_h": {
        "type": "ip"
      },
      "id.orig_p": {
        "type": "integer"
      },
      "id.resp_p": {
        "type": "integer"
      },
      "x509_log_id": {
        "type": "string"
      }
    }
  }
}
PUT _ingest/pipeline/temporary_hackathon
{
  "description" : "convert epoch to timestmap",
  "processors" : [
    {
      "date" : {
        "field" : "ts",
        "target_field" : "@timestamp",
        "formats" : ["ISO8601"],
        "if": "ctx.containsKey('ts') && ctx.containsKey('_write_ts')"
      }
    },
    {
      "date" : {
        "field" : "ts",
        "target_field" : "@timestamp",
        "formats" : ["UNIX"],
        "if": "ctx.containsKey('ts') && !(ctx.containsKey('@timestamp'))"
      }
    }
  ]
}
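The two date processors above act as a fallback chain: when `_write_ts` is present (as in Corelight-style JSON logs) `ts` is parsed as ISO8601, otherwise `ts` is treated as a UNIX epoch. A minimal Python sketch of that conversion (the function name is mine, not part of the gist or Elasticsearch):

```python
from datetime import datetime, timezone

def zeek_ts_to_timestamp(ts):
    """Mirror the ingest pipeline: accept either a UNIX epoch
    (Zeek's default `ts`, possibly fractional) or an ISO8601
    string, and return the ISO8601 value that would land in
    @timestamp."""
    if isinstance(ts, (int, float)):
        dt = datetime.fromtimestamp(float(ts), tz=timezone.utc)
        # Millisecond precision, matching Elasticsearch's date output
        return dt.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    return ts  # already ISO8601, pass through untouched

print(zeek_ts_to_timestamp(0))  # 1970-01-01T00:00:00.000Z
```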

Upload Zeek Data to ELK

If you already have a setup, you can just use what you have and skip this section. However, if you don't, you can use the following to turn the combined (all-in-one) Zeek log into an Elasticsearch command that loads the data.

Open the CyberChef recipe that will extract and set up the Zeek logs (make sure to right-click and open in a new tab): zeek logs zipped to elasticsearch upload

Browse to the directory, or download/save the file, that contains the combined Zeek logs:
Day 1: https://github.com/neu5ron/detection-hackathon-apt29/blob/master/datasets/day1/zeek/combined_zeek.log
Day 2: https://github.com/neu5ron/detection-hackathon-apt29/blob/master/datasets/day2/zeek/combined_zeek.log

For the pcaps:
Day 1: https://github.com/OTRF/detection-hackathon-apt29/tree/master/datasets/day1/pcaps
Day 2: https://github.com/OTRF/detection-hackathon-apt29/tree/master/datasets/day2/pcaps

Drag and drop combined_zeek.log into the site that opens. Copy the output, paste it into Kibana Dev Tools, and run it.
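What the CyberChef recipe produces is essentially an Elasticsearch `_bulk` request body: alternating action and document lines built from the newline-delimited JSON Zeek records. A small Python sketch of the same transformation (function and variable names are mine; the index name matches the template above):

```python
import json

def to_bulk_body(ndjson_lines, index="indexme-zeek-hackathon"):
    """Turn newline-delimited JSON Zeek records into an
    Elasticsearch _bulk request body."""
    action = json.dumps({"index": {"_index": index}})
    out = []
    for line in ndjson_lines:
        line = line.strip()
        if not line:
            continue
        out.append(action)                        # action line
        out.append(json.dumps(json.loads(line)))  # document line
    return "\n".join(out) + "\n"  # _bulk bodies must end with a newline

records = ['{"ts": 1588391122.005, "id.orig_h": "10.0.0.4"}']
body = to_bulk_body(records)
```

You would then POST the result to `_bulk?pipeline=temporary_hackathon` (the `pipeline` query parameter is what routes the records through the epoch-to-timestamp pipeline).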

Other ways to explore the data

JSON to CSV: https://json-csv.com/ (save as Excel and go)
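If you'd rather not upload the logs to a third-party site like json-csv.com, a few lines of Python can do the same flattening locally (names here are mine, purely illustrative):

```python
import csv
import io
import json

def ndjson_to_csv(ndjson_lines):
    """Flatten newline-delimited JSON Zeek records into CSV text,
    using the union of all keys (first-seen order) as the header."""
    records = [json.loads(l) for l in ndjson_lines if l.strip()]
    fields = []
    for r in records:
        for k in r:
            if k not in fields:
                fields.append(k)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, restval="")
    writer.writeheader()
    writer.writerows(records)  # missing keys become empty cells
    return buf.getvalue()
```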

Brim: https://github.com/brimsec/brim

Wireshark (the portable version is nice if you don't want to do a full admin install): https://www.wireshark.org/download.html

Just found this recently and haven't looked into it yet: https://github.com/SuperCowPowers/zat

Filebeat example

Replace the paths below with the location of your Zeek logs.

###################### Filebeat Zeek/Corelight Configuration Example #########################
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-reference-yml.html
#----------------------------- Input Logs --------------------------------
filebeat.inputs:
- type: log
  enabled: true
  # Change this to the directory of where your Zeek logs are stored
  paths:
    - /usr/share/zeek/logs/*.log
  #json.keys_under_root: true
  #fields_under_root: true
#----------------------------- Kafka output --------------------------------
output.kafka:
  # Place your HELK IP(s) here (keep the port).
  hosts: ["<HELK-IP>:9092"]
  topic: "zeek"
  max_message_bytes: 1000000