Maciej Szymczyk zorteran

@zorteran
zorteran / Vagrantfile
Created November 29, 2022 17:15
Simple vagrant with Elastic Stack SIEM
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.define "elk" do |elk|
    elk.vm.box = "ubuntu/focal64"
    elk.vm.hostname = "elk.lan"
    # further VM settings (resources, networking, Elastic Stack SIEM provisioning) presumably follow in the full gist
  end
end

from json import loads
from confluent_kafka import Consumer
from loguru import logger
from discord_webhook import DiscordWebhook, DiscordEmbed
KAFKA_CONF = {'bootstrap.servers': "10.10.10.10:9092",
              'group.id': "dev-wiaderko",
              'auto.offset.reset': 'latest'}  # earliest / latest
KAFKA_TOPICS = ["^es-alerts.*"]
DISCORD_WEBHOOK_URL = "https://discord.com/api/webhooks/something/somethingelse"
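
A minimal sketch of the consumer loop that might follow the configuration above; the alert field names ("rule_name", "message") are assumptions for illustration, not taken from the gist:

consumer = Consumer(KAFKA_CONF)
consumer.subscribe(KAFKA_TOPICS)

try:
    while True:
        msg = consumer.poll(1.0)  # wait up to 1 s for a message
        if msg is None:
            continue
        if msg.error():
            logger.error(msg.error())
            continue
        alert = loads(msg.value())  # alert document from the es-alerts.* topics
        logger.info("Alert received: {}", alert)
        webhook = DiscordWebhook(url=DISCORD_WEBHOOK_URL)
        embed = DiscordEmbed(title=alert.get("rule_name", "Alert"),
                             description=alert.get("message", ""),
                             color="ff0000")
        webhook.add_embed(embed)
        webhook.execute()
finally:
    consumer.close()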

Looking to create a Software RAID 1 setup for your 2-disk server on Ubuntu Server 20.04?

Let's start with the basics: the official Ubuntu guide (https://ubuntu.com/server/docs/install/storage) is outdated/wrong. And as of March 2021 there may also be a bug in how the bios_grub partitions are created when using multiple disks.

Now on to the solution:

  • Select "Custom storage layout" when you reach the storage configuration step of the installer.
  • If the disks have existing partitions, click on each disk under AVAILABLE DEVICES and then select REFORMAT. This will (temporarily) wipe out the partitions.
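
Once the installer finishes and the system boots, it is worth verifying the mirror. A small sketch that reads /proc/mdstat; the /dev/md0 name is an assumption, the installer may number the array differently:

# Post-install sanity check for the software RAID 1 mirror.
from pathlib import Path

mdstat = Path("/proc/mdstat").read_text()
print(mdstat)

# A healthy two-disk RAID 1 reports "[2/2] [UU]" for the array;
# "[2/1] [U_]" means one member is missing or still resyncing.
if "[UU]" in mdstat:
    print("RAID 1 mirror looks healthy")
else:
    print("Degraded or syncing; inspect with: mdadm --detail /dev/md0")
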
@zorteran
zorteran / .gitlab-ci.yml
Created June 10, 2022 15:15 — forked from SHolzhauer/.gitlab-ci.yml
Python script to use in CI/CD to update custom detection rules in the Elastic Stack
stages:
  - test
  - deploy

test_rules:
  stage: test
  image: python:3.8
  script:
    - python -V # Print out python version for debugging
    - pip install virtualenv
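
The deploy script itself is not shown in this preview. A rough sketch of what the rule-update step could look like against Kibana's detection engine API; the Kibana URL, credentials, and the rules/ directory layout are assumptions:

import json
import os
from pathlib import Path

import requests

KIBANA_URL = os.environ["KIBANA_URL"]  # e.g. https://kibana.example.com:5601
AUTH = (os.environ["KIBANA_USER"], os.environ["KIBANA_PASSWORD"])
HEADERS = {"kbn-xsrf": "true", "Content-Type": "application/json"}

for rule_file in Path("rules").glob("*.json"):
    rule = json.loads(rule_file.read_text())
    # PUT /api/detection_engine/rules updates an existing rule identified by rule_id in the body
    response = requests.put(f"{KIBANA_URL}/api/detection_engine/rules",
                            headers=HEADERS, auth=AUTH, json=rule)
    response.raise_for_status()
    print(f"Updated rule {rule.get('rule_id', rule_file.name)}")
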
@zorteran
zorteran / threatintel-cert_pl.json
Created February 13, 2022 17:22
CERT Polska Elasticsearch index template
// PUT _index_template/threatintel-cert_pl
{
  "index_patterns": [
    "threatintel-cert_pl-*"
  ],
  "template": {
    "settings": {
      "index": {
        "mapping": {
          "total_fields": {
require 'csv'
require 'ipaddr'

# Count how many of the networks listed in ipv4.txt (one address or CIDR per line)
# contain the address we want to check.
table = CSV.read("/home/maciej/ipv4.txt")
ips = table.map { |row| IPAddr.new(row[0]) }
ip_to_check = IPAddr.new("1.96.1.1")
result = ips.reduce(0) { |sum, ip| sum + (ip.include?(ip_to_check) ? 1 : 0) }
puts result
@zorteran
zorteran / gist:dfe922a839ed7c437aa46dadc285799a
Created November 10, 2021 08:36
Filebeat - Haproxy module - ingest pipeline - TCP log
PUT _ingest/pipeline/filebeat-7.15.1-haproxy-log-pipeline-custom
{
  "description" : "Pipeline for parsing HAProxy http, tcp and default logs. Requires the geoip plugin.",
  "processors" : [
    {
      "set" : {
        "field" : "event.ingested",
        "value" : "{{_ingest.timestamp}}"
      }
    },
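
Before pointing Filebeat at the custom pipeline, it can be tested with the _simulate endpoint. A sketch under assumptions: the URL, credentials, and the sample HAProxy TCP log line are made up for illustration:

import requests

ES_URL = "https://localhost:9200"
AUTH = ("elastic", "changeme")
PIPELINE = "filebeat-7.15.1-haproxy-log-pipeline-custom"

sample = {
    "docs": [
        {"_source": {"message": "Nov 10 08:36:00 lb haproxy[1234]: 10.0.0.5:51234 "
                                "[10/Nov/2021:08:36:00.123] ft_tcp bk_tcp/srv1 1/0/42 312 -- 1/1/1/1/0 0/0"}}
    ]
}

# POST _ingest/pipeline/<id>/_simulate runs the pipeline against the sample docs without indexing them
response = requests.post(f"{ES_URL}/_ingest/pipeline/{PIPELINE}/_simulate",
                         auth=AUTH, json=sample, verify=False)
response.raise_for_status()
print(response.json())
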
package pl.wiadrodanych.demo

import org.apache.spark.sql.DataFrame
import pl.wiadrodanych.demo.NiceApp.spark
import pl.wiadrodanych.demo.extensions.GroceryDataFrameExtensions._

object CoolApp {
  def main(args: Array[String]) = {
    val result = getGroceries
      .filterFruits

package pl.wiadrodanych.demo.extensions

import org.apache.spark.sql._
import org.apache.spark.sql.functions._
import pl.wiadrodanych.demo.base.SparkJob.spark.implicits._

object GroceryDataFrameExtensions {
  implicit class RichDataFrame(df: DataFrame) {

import org.apache.spark.sql._
import org.apache.spark.sql.functions._
import pl.wiadrodanych.demo.base.SparkJob
import pl.wiadrodanych.demo.base.SparkJob.spark.implicits._

object NiceApp {
  val spark = SparkJob.spark

  def main(args: Array[String]) = {
    val groceries: DataFrame = getGroceries
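
These Scala previews add chainable helpers such as filterFruits to DataFrame through an implicit class. A rough PySpark analogue of the same idea (not part of the gist; the column names are assumptions) chains plain functions with DataFrame.transform, available since Spark 3.0:

from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("nice-app").getOrCreate()

def filter_fruits(df: DataFrame) -> DataFrame:
    # Keep only the rows describing fruits
    return df.where(F.col("type") == "fruit")

groceries = spark.createDataFrame(
    [("apple", "fruit"), ("carrot", "vegetable")], ["name", "type"])

# .transform lets plain functions chain like the Scala extension methods
result = groceries.transform(filter_fruits)
result.show()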